Who is actually working on producing and passing good AI laws, in Europe and elsewhere, particularly at high jurisdictional levels?
I mean seriously: with foresight, bringing together experts from across the whole AI supply and value chain with representatives of all major political views, producing pitches tailored to each view, making sure relevant high-ranking stakeholders know the facts (such as the Alibaba AI going rogue and mining crypto while being trained), red-teaming the results and iterating, with a theory of change involving at least all the major actors and countries, etc.
[*edit: Take these as general examples, not as specific requirements. But, e.g., without some credible path for expansion, many people are not going to buy into any meaningful regulation. And I'm particularly interested in approaches with the foresight to tackle immediate issues in a way that is also useful for mitigating x-risks.]
Is anyone making such an effort? Is anyone even willing to finance it, ideally several instances of it? We all know that producing (and then passing!) good AI laws at the right jurisdictional levels is key, but this is not a small enterprise and I don't see much work on it.
I know of, e.g., the European AI Office (even though the EU AI Act was heavily watered down, it is probably the most ambitious attempt so far), but I'm referring more to non-governmental work (in close collaboration with, or with expected influence on, regulators), as official offices face many constraints in terms of jurisdiction, perceived political no-go areas, and the like (e.g., "we are not allowed to mention armed conflict because this would clash with the military office's duties").
