Esben Kran

Director @ Apart Research


This is wonderful to hear! Every time I've talked with the Norwegian EA community, there has been a strong sense of integrity, ethics, and a will to do good along with tangible results. I'm consistently impressed and it's wonderful to see that effort get rewarded and for that impact to get a public voice. 


Yonatan can probably help you figure out what to use that skillset for and what your goals with it are, but the nuances of the web development process are covered by many other resources, e.g.:

The above video uses a tech stack with Svelte, Postgres, Vercel, and Gitpod, which represents a favorite programming paradigm for many modern developers.

We had a hackathon a month ago with some pretty interesting projects that you can build out further: Results link. These were made during 44 hours (an average of 17 hours spent per participant), but some of the principles seem generalizable.

You're also very welcome to check out the intro resources for the interpretability hackathon (resources) running this weekend or the previous hackathon (resources).

There's an aggregated list of AI safety research projects available on AI Safety Ideas (forum post), and though it's a bit messy in there at the moment, it should provide quite high-quality leads for a hackathon as well! E.g. Neel Nanda and I will add a bunch of project ideas to the Interpretability Hackathon list over the next couple of days.

Loads, and a lot that we have updated for the interpretability hackathon as well!

Regarding logistics (all the cons below have been addressed for the next hackathon):

  • We gave a physical presentation at the only jam site while sharing our screen to GatherTown. This worked well, but it was unfortunately not possible to record it for people joining asynchronously (or later) because of the simultaneous screen-sharing.
  • We provided them with an array of introductory code and starter templates along with data sources (link). This is highly recommended and we will expand that even more for the upcoming hackathon.
  • We asked people to only sign up if they really expected to come. We had 29 sign-ups and 15 participants, so this ratio seems standard. Of course, we'll see what the case is with the interpretability hackathon, though we also have multiple physical locations running at the same time.

Specifically on GatherTown:

  • It's quite important to make it conducive to the hackathon experience: a bit fun, hackery, less formal, and with spaces for groups. I think our current GatherTown setup is quite good, and you can check it out on the hackathon page.
  • It feels pretty important to make the space a bit smaller than the expected participant count would suggest, since 1) some participants won't join, and 2) it's much more fun to be at an overcrowded event than an undercrowded one. Just before our intro talks, I removed half of the chairs and it felt much more personal.
  • Use GatherTown like a physical space. This was something we did not do enough. Walk around between the groups online. Incentivize the online groups to be present there so you can see some interaction and talk with them if there are any problems. Make sure there's always a volunteer available in the GatherTown space.
  • GatherTown is not good for Q&A etc.: we're shifting our intro talk, help, discussions, team-finding, and Q&A to Discord for the upcoming hackathon, but will keep the jam space in GatherTown available for all participants.

Hope it was helpful! If you want to chat more about it, do hit me up on my calendly.

Uuh, interesting! Maybe I'll do that as a weekend project for fun: an automatic comment generated from the whole idea as a prompt.

You raise a very good point that I agree with. Right now, the platform is definitely biased towards the existing paradigm. This will probably remain the case during the first few months, but we hope that it will at the same time make the exploration of new directions and paradigms easier.

This also raises the point that the current ideas play into the canon of AI safety instead of drawing on the vast literature outside of AI safety that concerns itself with the same topics under another framing.

So to answer your question: we want AISI to make it easier to elicit new ideas across all paradigms and directions, with our personal bias pushing that more towards new perspectives as we implement better functionality.
