Engineering responsible AI: How Singapore builds trust in emerging technologies
21 November 2025
How is Singapore shaping responsible AI? Explore how GovTech engineers develop safety tools, guardrails, and frameworks to ensure AI is used responsibly across the public sector.

Artificial Intelligence (AI) is now deeply integrated into daily life — from personalised recommendations to digital government services. As models become more advanced — including generative and agentic AI — our understanding of trust, ethics, and accountability must evolve in step.
Singapore’s approach to AI reflects a pragmatic balance between innovation and responsibility. Through close collaboration across agencies and industry, the focus is on ensuring safety, security, and public trust while driving meaningful digital transformation.
This article, part of the GovTech Decoded series, explores how GovTech engineers and partners are turning responsible AI principles into practice, from testing frameworks to applied use cases, and building AI governance frameworks that benefit everyone.
What does “Responsible AI” really mean?
At its core, responsible AI means developing and deploying systems that deliver societal benefits while minimising risks and harm.
At GovTech, this translates into building safe, secure, and trustworthy AI applications for the public sector. Jessica, who leads Responsible AI at GovTech, shares that six key principles guide this work:
Safety: Ensuring AI development and use do not cause negative consequences.
Fairness: Guarding against bias and discrimination.
Robustness: Ensuring systems perform reliably and as intended.
Security/Privacy: Protecting sensitive or citizen data.
Explainability: Enabling decisions to be understood.
Transparency: Making system behaviour and design traceable.
Adhering to these helps protect against bias, misinformation (AI ‘hallucinations’), and misuse in government systems, fostering digital trust amongst citizens. “If we can ensure that AI systems follow these principles,” Jessica explains, “then we are guaranteed or rather we are more assured that we are actually using it responsibly.”
These principles align with the Model AI Governance Framework (Second Edition) and the Model AI Governance Framework for Generative AI (2024), developed by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC) to help organisations operationalise trustworthy AI across the lifecycle.
Building Responsible AI in government
GovTech translates these frameworks into engineering practice through playbooks, testing tools, and open-source guardrails.
Responsible AI (RAI) Playbook – a practical guide for public officers on evaluating AI safety. It outlines how to conduct testing, implement guardrails, and assess bias or misinformation risks.
AI Guardian – GovTech’s platform for safety testing and governance, featuring the Litmus and Sentinel tools. Litmus enables comprehensive safety testing and risk assessment of AI applications, while Sentinel helps mitigate common AI vulnerabilities and implement best-in-class guardrails for government use.
LionGuard 2 – a multilingual moderation classifier trained for Singapore’s context (English/Singlish, Chinese, Malay, partial Tamil) that detects unsafe or biased content in real time.
These initiatives strengthen how public agencies test and secure AI systems before deployment — building trust in the digital services that citizens rely on daily.
GovTech’s efforts complement resources from IMDA and PDPC, such as the Implementation and Self-Assessment Guide for Organisations (ISAGO), which helps translate AI governance principles into practical steps.
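The exact interfaces of Litmus, Sentinel, and LionGuard 2 are not reproduced here, but the pattern they support can be illustrated. The sketch below, a minimal and purely hypothetical example in Python, gates both the user's prompt and the model's draft through a stand-in moderation check before anything reaches the citizen; the classifier, threshold, and function names are assumptions, not the actual GovTech APIs.

```python
# Illustrative input/output guardrail pattern only; the scoring heuristic and
# threshold are hypothetical stand-ins, not the Sentinel or LionGuard 2 APIs.

UNSAFE_THRESHOLD = 0.8  # assumed cut-off above which content is blocked

def moderation_score(text: str) -> float:
    """Stand-in for a moderation classifier that returns the probability
    the text is unsafe. A real deployment would call a trained model."""
    banned = {"<insult>", "<threat>"}  # toy heuristic for the sketch
    return 1.0 if any(tok in text for tok in banned) else 0.0

def guarded_reply(user_prompt: str, generate) -> str:
    """Check the prompt, generate a draft, then check the draft before release."""
    if moderation_score(user_prompt) >= UNSAFE_THRESHOLD:
        return "Your request could not be processed."            # block unsafe input
    draft = generate(user_prompt)                                 # call the underlying model
    if moderation_score(draft) >= UNSAFE_THRESHOLD:
        return "The response was withheld by a safety filter."    # block unsafe output
    return draft

# Usage with a dummy generator standing in for the real model:
print(guarded_reply("hello", lambda p: f"echo: {p}"))
```

In production, the toy heuristic would be replaced by a trained multilingual classifier such as LionGuard 2, with thresholds tuned to each agency's risk appetite.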
The next frontier of AI: Agentic AI and collaboration
Beyond generative AI, the next major wave of innovation is agentic AI — systems capable of pursuing goals and taking action autonomously. As GovTech’s Agentic AI Primer explains, this design makes systems more proactive — capable of performing complex tasks with limited human input. This is achieved by combining several components (sketched in code after the list below):
a large language model “brain”;
memory to store context;
tools for action (e.g., sending an email or querying a database); and
instructions or goals for decision-making.
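To make this anatomy concrete, here is a minimal sketch of an agentic loop in Python. It is illustrative only and under stated assumptions: the scripted call_llm stand-in, the tool names, and the stopping rule are hypothetical placeholders, not GovTech's implementation.

```python
# Minimal illustrative agent loop: an LLM "brain", memory, tools, and a goal.
# All names here (call_llm, the tools, the step cap) are hypothetical.
from typing import Callable

_scripted = iter([
    {"tool": "query_database", "input": "emerging CRM topics", "done": False},
    {"done": True},
])

def call_llm(prompt: str) -> dict:
    """Stand-in for a real LLM call; here it just replays a scripted plan."""
    return next(_scripted, {"done": True})

# Tools the agent may act with (e.g. sending an email, querying a database).
TOOLS: dict[str, Callable[[str], str]] = {
    "query_database": lambda q: f"rows matching: {q}",   # stubbed
    "send_email":     lambda msg: f"email sent: {msg}",  # stubbed
}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    memory: list[str] = [f"GOAL: {goal}"]        # context the agent accumulates
    for _ in range(max_steps):                   # cap steps to bound autonomy
        decision = call_llm("\n".join(memory))   # the LLM "brain" picks the next action
        if decision.get("done"):
            break
        tool = TOOLS.get(decision["tool"])
        if tool is None:                         # refuse tools outside the registry
            memory.append(f"BLOCKED: unknown tool {decision['tool']}")
            continue
        result = tool(decision["input"])         # take the action
        memory.append(f"{decision['tool']} -> {result}")  # store the outcome in memory
    return memory

print(run_agent("Summarise emerging CRM topics"))
```

The step cap and the tool registry are deliberate: even in a toy loop, bounding what the agent can do and how long it can run is the simplest form of the controls discussed next.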
However, this increased autonomy and capability also translates into greater risk of errors, misuse, and hallucination. To ensure that agentic systems remain safe and aligned to the public good, GovTech developed the Agentic Risk & Capability (ARC) Framework. It identifies the potential risks of agentic systems with different capabilities, be it searching the internet or executing code, and outlines technical controls teams can apply to mitigate those risks while continuing to innovate responsibly.
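The ARC Framework itself is a governance document rather than code, but one family of technical controls it points towards, limiting what an agent may do based on its assessed capability and risk, can be sketched as follows. The risk tiers, tool names, and approval rule below are hypothetical assumptions for illustration only.

```python
# Hypothetical capability-gating control: higher-risk tools require a higher
# clearance tier and, above a cut-off, a human approval step. Tier values,
# tool names, and the approval hook are illustrative assumptions.

RISK_TIER = {            # assumed risk ranking of agent capabilities
    "search_internet": 1,
    "query_database": 2,
    "execute_code": 3,
}
HUMAN_APPROVAL_TIER = 3  # actions at or above this tier need a human in the loop

def is_allowed(tool: str, agent_clearance: int, human_approved: bool = False) -> bool:
    tier = RISK_TIER.get(tool)
    if tier is None:                       # unknown tools are denied by default
        return False
    if tier > agent_clearance:             # agent not cleared for this capability
        return False
    if tier >= HUMAN_APPROVAL_TIER and not human_approved:
        return False                       # high-risk actions need explicit sign-off
    return True

# Example: an agent cleared up to tier 2 may search and query,
# but cannot execute code without both clearance and human approval.
assert is_allowed("search_internet", agent_clearance=2)
assert not is_allowed("execute_code", agent_clearance=2)
```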
From experiments to impact
Agentic AI is already being tested across several public-sector domains.
Customer Relationship Management (CRM):
Lois, a data scientist at GovTech, explains how an internal beta team of AI agents is helping officers surface insights faster. “When a director asks, ‘Can you share with me what's the emerging topics in the CRM space, and what are some of the divisions that need to be brought on board to resolve a particular policy review?’, the [users] could simply query this agentic system and get a quick answer before going back to their bosses.”
Cybersecurity:
GovTech is also experimenting with agentic AI to automate routine threat-hunting and security testing. Ding Yao, a cybersecurity engineer, shares: “Much of the work goes into codifying the expertise of our cyber officers so that AI agents can automate some of the more routine tasks … This frees up the capacity of our limited manpower resources so that they can focus on doing higher value work.”
To accelerate development, GovTech collaborates with global AI partners such as Google — ensuring Singapore can leapfrog its capabilities without compromising safety. “In pushing for the adoption of AI,” Ding Yao adds, “we also need to ensure that we do not compromise the safety and security of AI’s beneficiaries.”
A living process of trust and collaboration
Responsible AI is a continuous journey of learning, testing, and improving. By integrating frameworks such as the Model AI Governance Framework for Generative AI with engineering platforms like AI Guardian and the ARC Framework, GovTech is shaping a public sector that adopts AI responsibly — where innovation is matched by integrity.
Curious how Singapore secures AI systems and builds guardrails for the next wave of intelligent agents? Watch the latest episode of GovTech Decoded to hear from GovTechies as they share how they design safe, secure, and human-centric AI.
Explore more stories of innovation and impact on the GovTech Decoded YouTube channel.