This is wonderful to hear! Every time I've talked with the Norwegian EA community, there has been a strong sense of integrity, ethics, and a will to do good along with tangible results. I'm consistently impressed and it's wonderful to see that effort get rewarded and for that impact to get a public voice.
Yonatan can probably help you figure out what to use that skillset for and how it fits your goals, but the nuances of the web development process are covered well by many other resources, e.g.:
The video above uses a tech stack of Svelte, Postgres, Vercel, and Gitpod, a workflow that many modern web developers favor.
We had a hackathon a month ago with some pretty interesting projects that you can build out further: Results link. These were made within 44 hours (an average of 17 hours spent per participant), but some of the principles seem generalizable.
You're also very welcome to check out aisi.ai, the intro resources for the interpretability hackathon (resources) running this weekend, or those for the previous hackathon (resources).
There's an aggregated list of AI safety research projects available on AI Safety Ideas (forum post), and though it's a bit messy in there at the moment, it should offer quite high-quality leads for a hackathon as well! E.g., Neel Nanda and I will be adding a bunch of project ideas to the Interpretability Hackathon list over the next couple of days.
Loads, and a lot that we have updated for the interpretability hackathon as well!
Regarding logistics (all the cons have been addressed for the next hackathon):
Specifically on GatherTown:
Hope it was helpful! If you want to chat more about it, do hit me up on my calendly.
Ooh, interesting! Maybe I'll do that as a weekend project for fun: an automatic comment generated by feeding the whole idea in as a prompt.
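A minimal sketch of what that weekend project could look like: feed the full idea text into a language model as a prompt and get back a short review comment. Everything here is an assumption for illustration (the `openai` client usage, the model name, and the prompt wording), not a tested setup.

```python
def build_prompt(title: str, description: str) -> str:
    """Wrap a research idea in a reviewing prompt for a language model."""
    return (
        "You are reviewing a proposed AI safety research idea.\n"
        f"Title: {title}\n"
        f"Description: {description}\n"
        "Write a short, constructive comment on its strengths and weaknesses."
    )

def auto_comment(title: str, description: str) -> str:
    # Assumes the `openai` package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": build_prompt(title, description)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(build_prompt("Sparse probing",
                       "Probe activations for sparse features."))
```

The prompt builder is kept separate from the API call so the interesting part (how the idea is framed for the model) can be iterated on without spending tokens.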
You raise a very good point that I agree with. Right now, the platform is definitely biased towards the existing paradigm. This will probably be the case during the first few months, but we hope that it will help make the exploration of new directions and paradigms easier at the same time.
This also raises the point that the current ideas play into the canon of AI safety rather than drawing on the vast literature outside AI safety that tackles the same topics under a different framing.
So to answer your questions: we want AISI to make it easier to elicit new ideas across all paradigms and directions, with our personal bias pushing that toward new perspectives as we implement better functionality.