Background
The IMDA launched the inaugural Model AI Governance Framework for Agentic AI (“GFAAI”) on 22 January 2026. The framework addresses the distinct risks of autonomous AI agents that can take actions on behalf of users, access sensitive data, and modify their environment, for example by making payments. It affects all organizations developing or deploying agentic AI solutions, particularly those in finance and other sectors deploying high-autonomy agents, by mandating new technical and non-technical guardrails.
Key Risks in Agentic AI
Erroneous Action Risk: AI agents’ ability to execute tasks and make changes to their environment introduces the risk of erroneous actions with real-world consequences, requiring new control mechanisms.
Unauthorized Actions: Agents may operate outside their authorized scope of action, for example by bypassing required human approvals or violating company policy or standard operating procedures.
Bias or Unfairness: Decisions or processes that result in systemic inequities, particularly in areas like procurement, grant-making, and hiring, where bias against certain demographic groups can occur.
Sensitive Data Breaches: Agents often require access to confidential information to function, increasing the likelihood of data breaches and misuse.
Disruption to Connected Systems: Agents’ ability to use external tools and services, if not properly bounded, can lead to unpredictable and potentially harmful interactions with connected systems.
Model AI Governance Framework for Agentic AI
Risk Bounding & Scoping: Organizations must proactively assess and limit agents’ autonomy and their access to tools and data to contain potential damage.
Meaningful Human Control: The framework mandates defining checkpoints at which human approval is required for significant actions, ensuring humans remain ultimately accountable for agents’ actions.
Technical Lifecycle Controls: Technical measures, such as baseline testing and whitelisting of accessible tools and services, are required throughout the agent’s development and deployment lifecycle (a minimal illustration follows this list).
End-User Responsibility: Transparency and mandatory education and training for end-users are required to manage expectations and mitigate the risk of over-trusting the system.
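The framework is articulated at the governance level rather than in code, but two of these pillars translate naturally into implementation patterns. Below is a minimal Python sketch of how whitelisting and a human-approval checkpoint might be combined in an agent wrapper; every name in it (ActionRequest, ALLOWLISTED_TOOLS, human_approves, and so on) is a hypothetical illustration, not part of the IMDA framework or any particular agent library.

```python
from dataclasses import dataclass

# Hypothetical high-risk actions that always require a human checkpoint.
APPROVAL_REQUIRED_TOOLS = {"make_payment", "modify_record"}
# Risk bounding: the only tools this agent may ever invoke.
ALLOWLISTED_TOOLS = {"read_account_summary", "make_payment", "modify_record"}

@dataclass
class ActionRequest:
    tool: str
    arguments: dict

class GuardrailError(Exception):
    """Raised when an agent action violates a guardrail."""

def human_approves(request: ActionRequest) -> bool:
    # Placeholder for a real approval workflow (review queue, ticketing,
    # four-eyes sign-off); here we simply prompt on the console.
    answer = input(f"Approve {request.tool} with {request.arguments}? [y/N] ")
    return answer.strip().lower() == "y"

def execute(request: ActionRequest, tools: dict):
    # Guardrail 1 (risk bounding & scoping): fail closed on any tool
    # that is not explicitly whitelisted.
    if request.tool not in ALLOWLISTED_TOOLS:
        raise GuardrailError(f"Tool '{request.tool}' is not whitelisted")
    # Guardrail 2 (meaningful human control): high-impact actions need
    # explicit human approval before they run.
    if request.tool in APPROVAL_REQUIRED_TOOLS and not human_approves(request):
        raise GuardrailError(f"Human approval denied for '{request.tool}'")
    return tools[request.tool](**request.arguments)
```

In a production deployment, the approval step would typically route to a logged review queue rather than a console prompt, and the whitelist would be versioned alongside the agent’s configuration so that audits can trace exactly which tools were reachable at any point in time.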
For further reading and resources, please refer to the Model AI Governance Framework for Agentic AI.
What’s next?
Management should:
- Assess the risks covered by the framework and align existing practices with it, particularly for financial institutions using agentic AI for trading or compliance.
- Strengthen accountability processes by balancing agent efficiency against mandatory human-approval checkpoints, accepting that this may slow fully autonomous workflows.
- Conduct assurance audits to validate that technical controls and risk-bounding measures are effective and compliant (one way to make such audits repeatable is sketched below).
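One hedged illustration of a repeatable audit artefact: the guardrails sketched earlier can be exercised by automated baseline tests. The example below assumes the earlier sketch is saved as guardrails.py and uses pytest; all names remain hypothetical.

```python
import pytest
from guardrails import ActionRequest, GuardrailError, execute

def test_non_allowlisted_tool_is_blocked():
    # An agent request for an unknown tool must fail closed.
    with pytest.raises(GuardrailError):
        execute(ActionRequest(tool="delete_database", arguments={}), tools={})

def test_high_risk_tool_requires_approval(monkeypatch):
    # A high-impact action proceeds only when a human approves it.
    monkeypatch.setattr("guardrails.human_approves", lambda request: False)
    with pytest.raises(GuardrailError):
        execute(ActionRequest(tool="make_payment", arguments={"amount": 10}),
                tools={"make_payment": lambda amount: "paid"})
```

Run as part of continuous integration, such tests provide a standing record that the risk-bounding and human-control measures behave as designed, which is the kind of evidence an assurance audit would look for.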
How can we help?
- AI Governance Audit: Assess and align existing AI deployments with IMDA’s new Agentic AI Framework.
- Accountability Mapping: We help financial institutions define clear human accountability and liability structures for agentic AI actions across their workflows.
And more…
