Does anyone know of work being done on what makes for good intellectual property laws around technologies that can shape the world in catastrophic ways, or mitigate catastrophes? Tight intellectual property laws reduce the number of people who need to be coordinated with in order to control a technology, but they can also introduce bottlenecks in the introduction and use of technologies (like coordination technologies) that could be used to mitigate risks.
Are there people thinking about this?
Like Geoffrey, I like the way this question is set up, but I'm not quite sure I've understood it correctly.
However, as an initial response I would say that the legal approach to AI is still so much in its infancy that responses to risk have to be more holistic (see the EU AI Act, which uses ‘risk tiers’).
When we think about IP laws, they tend not to play quite the same role in reducing risk. Tight IP might have corollary effects on, e.g., how NLP systems can be trained, but I would need to think carefully to work out whether, if at all, intellectual property laws could have such an effect. Would love to hear your thoughts, however!