Mauhn is a company doing research in AGI, with a capped-profit structure and an ethics board (composed of people from outside the company). While there is a significant amount of AI/AGI safety research, there is still a gap in how to put it into practice for organizations doing AGI research. We want to help close this gap with the following blog post (written for an audience not familiar with AI safety) and associated links to relevant documents:

Mauhn AI Safety Vision
This summarizes the most important commitments Mauhn makes towards building safe (proto-)AGI systems

Ethics section of Mauhn’s statutes
The statutes of Mauhn define the legal structure of the ethics board

Declaration of Ethical Commitment
Every founder, investor and employee signs the Declaration of Ethical Commitment before starting a collaboration with Mauhn

We hope that other organizations will adopt similar principles or derivatives thereof. We were a bit short on bandwidth for this first version, but we want to include more feedback from the AI safety community in future versions of these documents. Please drop me an e-mail (berg@mauhn.com) if you'd like to contribute to future versions of this work. We will probably update the documentation about once per year.

Comments

I think it's great that you're trying to lead by example, and concrete ideas of how companies can responsibly deal with the potential of leading the development of advanced or even transformative AI systems are really welcome in my view. I skimmed three of your links and thought it all sounded basically sensible, though I suspect things will probably end up looking very different from this, and I'd never want to put so much responsibility on anything called an "Ethics Board". (But I'm very basic in my thinking around strategic and governance questions around AI, so...)

One question I had was whether you think it's desirable that AGI will be developed and implemented by a single company, or a group of companies. I think it's probable, but wondered whether there are better institutions from which an AI safety person would try to push the AI landscape (e.g. governments, non-profits, NGOs, international governmental bodies, ...).

Also, your alignment plan sounds like something that still requires mostly basic research, so I wondered whether you already have concrete ideas for research projects to make progress here.

Alignment through Education: educate AI systems, just like we educate our children, to allow AI systems to learn human values, e.g. through trial and error.

Also, I'm not sure why you haven't gotten any feedback here so far; maybe consider crossposting it to lesswrong.com, too.

If it's not the ethics board that has the responsibility, it is either the company itself or the government (through legislation). The first option is clearly not the safest. The second has two problems: (1) legislation typically arrives only after the first incidents occur, which could be dramatic in this case, and (2) it is quite generic, which wouldn't work well in a space where different AGI organizations have completely different algorithms to make safe. Although an ethics board is not perfect, it can be tailor-made and still be free of conflicts of interest.

I agree that it would be more desirable for AGI to be developed by a collaboration between governments or non-profit institutions. With my background, it was simply a lot easier, from a pragmatic perspective, to find money through investors than through non-profit institutions.

Yes, the alignment system is still quite basic (although I believe that the concept of education would already solve a significant number of safety problems). The first gap we focus on is how to optimize the organizational safety structure, because this needs to be right from the start: it's really hard, for instance, to convince investors to accept a capped-profit structure if you're already making a lot of profit. The technical AI safety plan is less crucial for us in this phase, because our networks are still small and used for classification only. It goes without saying that we'll put much more effort into a technical AI safety plan in the future.
 

I haven't asked for feedback yet because of bandwidth: we're currently a team of three people with a lot to get done. We're happy to post this first version, but we also need to move forward technically. So gathering more feedback will have to wait another 1-2 years or so.
