Berg Severens

3 karma · Joined Jul 2021

Posts: 1

Comments: 1

If the responsibility doesn't lie with the ethics board, it lies either with the company itself or with the government (through legislation). The first option is clearly not the safest. The second has two problems: (1) legislation typically arrives only after the first incidents have occurred, which could be dramatic in this case, and (2) it is quite generic, which doesn't work well in a space where different AGI organizations have completely different algorithms to make safe. Although an ethics board is not perfect, it can be tailor-made and still be free of conflicts of interest.

I agree that it would be more desirable for AGI to be developed through a collaboration between governments or by non-profit institutions. With my background, it was simply a lot easier, from a pragmatic perspective, to find money through investors than through non-profit institutions.

Yes, the alignment system is still quite basic (although I believe the concept of education would already solve a significant number of safety problems). The first gap we focus on is how to optimize the organizational safety structure, because this needs to be right from the start: it's really hard, for instance, to convince investors to accept a capped-profit structure once you're already making a lot of profit. The technical AI safety plan is less crucial for us in this phase, because the networks are still small and used for classification only. It goes without saying that we'll put much more effort into a technical AI safety plan in the future.
 

I haven't asked for feedback yet, for bandwidth reasons: we're currently a team of three people with a lot to get done. We're happy to post this first version, but we also need to move forward technically. So getting more feedback will have to wait 1-2 years or so.