Hugo Wong


For NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, it is suggested to add the following actions under GOVERN 3.2:

- Establish real-time monitoring systems that continuously track the actions and decisions of autonomous AI systems.

- Establish built-in mechanisms for human operators to intervene in AI decisions and take control when necessary.

- Include "near-miss incidents" in Action ID V-4.3-004.

- As Action ID MS-2.6-008 is critical in managing high-risk GAI systems, it is suggested to include more detailed guidelines on "fail-safe mechanisms," since fallback and fail-safe mechanisms are different.

Note: Fallback mechanisms aim to maintain some level of operational continuity, even if at reduced functionality. Fail-safe mechanisms prioritize safety over continued operation, often resulting in a complete shutdown or a transition to a safe state.
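The distinction above can be sketched in code. This is a minimal, illustrative sketch (the severity labels and states are assumptions, not terms from the NIST framework): a recoverable failure triggers a fallback to degraded operation, while anything else triggers a fail-safe transition to a shutdown state.

```python
from enum import Enum

class SystemState(Enum):
    RUNNING = "running"
    DEGRADED = "degraded"   # fallback: reduced functionality, still operating
    SHUTDOWN = "shutdown"   # fail-safe: safety over continuity

def handle_failure(severity: str) -> SystemState:
    """Illustrative dispatch between fallback and fail-safe behavior."""
    if severity == "recoverable":
        # Fallback mechanism: keep operating at reduced capability,
        # e.g. switch from the GAI model to a simpler rule-based responder.
        return SystemState.DEGRADED
    # Fail-safe mechanism: prioritize safety over continued operation --
    # transition to a complete shutdown / known-safe state.
    return SystemState.SHUTDOWN
```

The key design point is that the two mechanisms answer different questions: fallback asks "how do we keep serving users?", while fail-safe asks "how do we stop causing harm?".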


For NIST SP 800-218A, it is suggested to include the following at P.11, Task PS.1.3:

- Document the justification for the selection of AI models and their hyperparameters.
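One lightweight way to implement this suggestion is a structured record stored alongside other SDLC artifacts. The sketch below is a hypothetical format (all field and model names are illustrative, not from SP 800-218A):

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelSelectionRecord:
    """Hypothetical record documenting why a model and its
    hyperparameters were chosen."""
    model_name: str
    hyperparameters: dict
    justification: str
    alternatives_considered: list = field(default_factory=list)

record = ModelSelectionRecord(
    model_name="text-classifier-v2",
    hyperparameters={"learning_rate": 3e-5, "batch_size": 32},
    justification="Best validation F1 with lower inference cost "
                  "than the larger candidate models.",
    alternatives_considered=["baseline-logreg", "larger-transformer"],
)

# Serialize so the record can be versioned with the rest of the codebase.
record_json = json.dumps(asdict(record), indent=2)
```

Keeping the record machine-readable makes it auditable later, which is the point of documenting the justification in the first place.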

After reviewing the report, my comments are as follows:


Opportunities and Enablers:

Democratizing access to AI capabilities and infrastructure: expanding access to AI tools, datasets, compute resources, and educational opportunities, especially for underrepresented groups and regions.

Providing incentives for AI applications focused on social good, sustainability, and inclusive growth is important. The UN could highlight exemplary AI projects aligned with the SDGs.


Risks and Challenges:

Include intrinsic safety as one of the principles in AI design and risk assessment:

Intrinsic safety is a key principle that should be integrated into the design, development, and deployment of AI systems from the ground up to mitigate risks. This means building in safeguards, constraints, and fail-safe mechanisms that prevent AI systems from causing unintended harm, even in cases of failure or misuse.
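As a rough sketch of what "built-in safeguards" can mean in practice, the wrapper below checks every proposed action against hard constraints before it is executed; a violation is blocked rather than performed, regardless of what upstream components produced it. The constraint and action names are hypothetical.

```python
def constrained_action(proposed_action: dict, constraints: list) -> dict:
    """Intrinsic-safety sketch: actions must pass every hard
    constraint check before execution; violations become a no-op."""
    for check in constraints:
        ok, reason = check(proposed_action)
        if not ok:
            # Block the action entirely rather than execute a
            # partially-safe version of it.
            return {"action": "noop", "blocked_reason": reason}
    return proposed_action

# Example hard constraint (illustrative): cap actuator power.
def power_limit(action: dict):
    if action.get("power", 0) > 100:
        return False, "power exceeds safe limit"
    return True, ""

blocked = constrained_action({"action": "move", "power": 250}, [power_limit])
allowed = constrained_action({"action": "move", "power": 10}, [power_limit])
```

The design choice here is that safety checks sit between the AI system and the world, so they still apply when the model itself fails or is misused.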


Guiding principles for the formation of new global governance institutions for AI:

Enforcement mechanisms: clearly define the authority or mechanisms by which these institutions can enforce their decisions or policies.


Institutional Functions that an international governance regime for AI should carry out: 

Referring to the FDA's Adverse Event Reporting System (AERS), it is recommended to establish a similar system as a tool for a global AI safety monitoring program.
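A minimal sketch of what such a reporting system's core record might look like, assuming FAERS-style structured fields (all field names and categories here are illustrative assumptions, not part of any existing standard):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIAdverseEventReport:
    """Hypothetical structure for an AERS-style AI incident report."""
    system_id: str
    event_type: str     # e.g. "harmful_output", "near_miss"
    severity: str       # e.g. "low", "moderate", "severe"
    description: str
    reported_at: str

def file_report(system_id: str, event_type: str,
                severity: str, description: str) -> AIAdverseEventReport:
    # Timestamp in UTC so reports filed across jurisdictions
    # sort consistently.
    return AIAdverseEventReport(
        system_id=system_id,
        event_type=event_type,
        severity=severity,
        description=description,
        reported_at=datetime.now(timezone.utc).isoformat(),
    )

report = file_report(
    "chatbot-007", "near_miss", "low",
    "Model nearly disclosed personal data; blocked by output filter.",
)
```

Including "near_miss" as a first-class event type echoes the earlier suggestion about Action ID V-4.3-004: near-misses are often the richest source of safety signal before an actual harm occurs.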

What is the definition of good?

I think it is a return to the nature of human beings: meeting their basic needs for survival and achieving a prosperous life, in which each person realizes his or her potential, human relationships are good, and there is harmonious social coexistence.

This requires a society that is fair, diverse, and accepting of different opinions, as well as a healthy ecosystem. Therefore, I believe that, as good ancestors, we need to establish a system that monitors all of these elements and takes preventive measures to ensure they are not jeopardized by short-term interests. And when we know that development has deviated from the right direction, we must make timely corrections.