Model AI Governance Framework
Singapore's official AI governance framework is frequently cited internationally as a pragmatic, industry-friendly model; it is relevant for researchers studying national AI policy approaches and technical governance tools.
Metadata
Summary
Singapore's PDPC presents a balanced AI governance framework that promotes innovation while protecting consumer interests, operationalized through AI Verify—a testing toolkit validating AI systems against 11 governance principles including transparency, fairness, safety, and accountability. The framework provides organizations with standardized methods for testing supervised-learning models and generating transparency reports for stakeholders. The associated AI Verify Foundation, backed by Google, IBM, and Microsoft, drives open-source AI testing capabilities as a global reference for responsible AI development.
Key Points
- Balances facilitating AI innovation with safeguarding consumer interests through a practical, principles-based governance framework.
- AI Verify toolkit tests supervised-learning models against 11 principles: transparency, fairness, safety, accountability, and others.
- Organizations can generate standardized testing reports to demonstrate responsible AI use to regulators and stakeholders.
- AI Verify Foundation operates as a not-for-profit with major tech members (Google, IBM, Microsoft) advancing open-source AI testing.
- Intended as a global reference point, supporting international alignment on responsible AI governance practices.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Governance and Policy | Crux | 66.0 |
Cached Content Preview
PDPC | Singapore’s Approach to AI Governance
As Singapore develops its digital economy, a trusted ecosystem is key - one where organisations can benefit from tech innovations while consumers are confident to adopt and use AI. In the global discourse on AI ethics and governance, Singapore believes that its balanced approach can facilitate innovation, safeguard consumer interests, and serve as a common global reference point.
IMDA developed AI Verify, an AI governance testing framework and software toolkit. The testing framework consists of 11 AI ethics principles* around which jurisdictions worldwide are coalescing; it is consistent with internationally recognised AI frameworks such as those from the EU and OECD, and with Singapore's Model AI Governance Framework. AI Verify helps organisations validate the performance of their AI systems against these principles through standardised tests.
*The 11 governance principles are transparency, explainability, repeatability/reproducibility, safety, security, robustness, fairness, data governance, accountability, human agency and oversight, inclusive growth, societal and environmental well-being.
The testing process comprises technical tests and process checks. The AI Verify toolkit is a single integrated software package that operates within the user's enterprise environment. It enables users to conduct technical tests on their AI models, record process checks, and then generate testing reports for the AI model under test. By sharing these reports with their stakeholders, companies can be more transparent about their AI.
AI Verify can currently perform technical tests on common supervised-learning classification and regression models for most tabular and image datasets. It cannot test Generative AI/LLMs. AI Verify does not set ethical standards, nor does it guarantee that tested AI systems will be completely safe or free from risks or biases.
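To make the notion of a "technical test" on a supervised classification model concrete, here is a minimal sketch of one common fairness metric, the demographic parity gap (the difference in positive-prediction rates between groups). This is an illustrative assumption about the kind of check such toolkits run, not the AI Verify API; all function and variable names are hypothetical.

```python
# Hypothetical sketch: a demographic-parity check of the kind a
# governance testing toolkit might apply to a classifier's outputs.
# NOT the AI Verify API; names here are illustrative only.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate across groups."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    ordered = sorted(rates.values())
    return ordered[-1] - ordered[0]

# Toy example: model predictions (1 = positive class) and group labels.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
print(f"demographic parity gap: {gap:.2f}")
```

A real toolkit would compute many such metrics against user-set thresholds and fold the results into a standardised report, alongside process checks that are documented rather than computed.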
AI Verify was first developed in consultation with companies from different sectors and of different scales, including AWS, DBS, Google, Meta, Microsoft, Singapore Airlines, NCS/LTA, Standard Chartered, UCARE.AI and X0PA. It was subsequently released in May 2022 for an international pilot, which attracted interest from over 50 local and multinational companies, including Dell, Hitachi and IBM.
As AI testing technologies are still
... (truncated, 13 KB total)