A co-pilot, not a replacement: How AI will augment human labour in the workforce
28 November 2025
How will AI transform the future of work? Explore how GovTech’s responsible AI tools, safety frameworks, and workforce initiatives ensure AI becomes a trusted co-pilot that enhances productivity, creativity, and public service delivery.

As artificial intelligence (AI) transforms how we live and work, questions about its role in the workplace have never been more urgent. Will machines replace human labour, or will they empower us to work smarter, faster, and more creatively?
Singapore’s approach to digital transformation offers a balanced answer: technology should augment, not replace, human capability. By combining innovation with governance, the public sector can harness AI responsibly — ensuring tools remain secure, trusted, and citizen-centric.
Debunking the “replacement” myth: Why AI can’t think like humans
Let’s begin with the most common concern: Can AI replace human labour?
While AI has made impressive strides, it still cannot reason or truly understand context like humans do. Its limitations become clear when we look closely at how AI works and where it fails.
AI doesn't understand — it matches patterns
AI excels at processing large volumes of data and identifying patterns. It can summarise reports, classify information, and handle routine queries at scale. However, it does not actually comprehend meaning.
When a large language model (LLM) generates a sentence, it predicts what word is statistically most likely to come next — not what is conceptually correct. For example, when completing the phrase “I like to eat a red [BLANK],” the model is likely to choose “apple” because that pattern appears most often in its training data, even though “cabbage” would also make sense.
Limits of machine reasoning
Apple’s 2024 paper on reasoning limits highlights a critical insight: even advanced “reasoning” models eventually break down when faced with complex, multi-step problems. Despite self-reflection mechanisms, LLMs fail to generalise reasoning beyond a certain threshold.
Hallucination risks
Researchers have found AI to “hallucinate” and confidently generate false information that sounds credible. It might cite non-existent research papers, make up statistics, or invent facts because it's optimised to just give you an answer, not to tell you "I don't know."
Over-automation and productivity paradox
Unchecked automation can reduce efficiency when humans must compensate for AI’s limitations. This over-automation often leads workers to spend more time verifying or correcting results, offsetting productivity gains. Research also highlights a productivity paradox where output increases but true efficiency declines.
In short, AI may help us work faster, but without proper oversight, it can also make us work harder — producing what some call “workslop,” where quality control negates time saved.
Responsible AI: Supporting trustworthy and ethical development
For users to confidently rely on AI as a copilot in their work, AI tools must be safe, trusted, and aligned to our ethical values. There have been numerous examples of how AI tools can go wrong, from generating hateful content to being manipulated by jailbreaking attacks, and GovTech has been working hard to safeguard AI tools and components.
Litmus: a comprehensive AI safety testing platform that assesses how resistant LLM applications are to common safety and security attacks.
Sentinel: a “Guardrails-as-a-Service” platform that protects LLM applications by identifying and filtering unsafe content, such as prompt injection attacks or toxic content, from being returned to the user.
LionGuard: a Singapore-contextualised moderation classifier which can identify if a text contains unsafe Singaporean slang or references across all 4 languages, including Singlish.
Responsible AI Playbook: designed to guide developers in safe, trustworthy and ethical development, deployment and monitoring of AI systems.
Together, these initiatives help to make our AI systems safer, more secure, and aligned to the public good, which in turn help to make AI tools more of a trusted copilot.
AI as your co-pilot: A better framework
If effective AI use requires human oversight, what does a productive human–machine partnership look like? The answer lies in using AI as your co-pilot.
When AI acts as a co-pilot, it handles defined, repetitive tasks while humans maintain judgment and control. Much like autopilot in aviation, the system manages routine operations, but the human operator still makes critical decisions.
This co-pilot approach sits between AI assistance (suggestions only) and full automation (independent execution). It enables efficiency without surrendering accountability. The question shifts from “Can AI do this job?” to “How can AI help humans do this job better?”
Automating mundane tasks, freeing up potential
AI is most valuable when it removes low-value work so people can focus on strategy and creativity. Examples include:
Automating data entry, scheduling, or report generation.
Drafting summaries or initial responses that humans refine.
Analysing trends to improve citizen services.
However, these benefits only materialise when AI output is verified and when time saved is reinvested in higher-value work.
Enhancing decision-making with intelligent insights
AI’s greatest strength is processing vast datasets and uncovering patterns that humans might miss.
One real-world example is Johns Hopkins University’s TREWS (Targeted Real-Time Early Warning System), an AI model that detects sepsis — a life-threatening condition — hours earlier than traditional methods. A study showed that TREWS accurately identified 82% of sepsis cases among more than 590,000 patients, significantly improving survival rates.
But crucially, TREWS does not replace doctors. It alerts them to possible risks, allowing clinicians to validate, interpret, and act on AI recommendations within the full patient context. This “human-in-the-loop” design exemplifies responsible AI.
An example is PENSIEVE-AI, an innovative AI tool developed by GovTech Singapore and Singapore General Hospital to detect early signs of dementia in seniors. PENSIEVE-AI uses artificial intelligence to analyse strokes made by people undergoing a simple drawing test, picking up subtle patterns easily overlooked by traditional pen and paper methods. This leads to a high accuracy of estimating the risk of dementia, allowing for earlier interventions and better management of the condition. This collaboration exemplifies how AI can be used to improve healthcare outcomes and provide timely support for individuals at risk of dementia.
Boosting creativity and innovation
AI can serve as a creative partner — generating ideas, synthesising research, or producing drafts that humans refine. Generative tools make it easier to explore more design variations and test new concepts quickly, but true creativity still depends on human judgment for quality and originality.
In creative work, AI acts as a catalyst rather than a creator. It can draft or inspire, but the human touch gives meaning, emotion, and purpose to every idea. Without that, teams risk producing output that looks polished yet lacks depth or authenticity — creativity in form, but not in substance.
Reshaping the workforce: New roles and evolving skills
The rise of AI is transforming how work is organised and the kinds of skills that are in demand. Beyond technical knowledge, success now depends on how well people can collaborate with intelligent systems.
Some of these emerging roles that bridge technical and human expertise include:
AI trainers — who teach models to understand specific organisational contexts.
Prompt engineers — who optimise human-AI interaction for accuracy and relevance.
AI ethicists — who ensure systems align with societal and organisational values.
AI integration specialists — who deploy tools safely within existing systems.
Together, these roles reflect a broader shift in the modern workforce — one where adaptability, digital confidence, and responsible innovation are becoming essential skills for the future.
The imperative of AI literacy and continuous up-skilling
In an era where AI is becoming integral to nearly every job, the ability to understand, work with and adapt to intelligent systems is no longer optional — it’s essential. Organisations and professionals alike must continuously build their knowledge around how AI works, how it’s governed and how it changes workflows. This is why AI literacy and ongoing up-skilling have become foundational: they enable teams to harness AI effectively, avoid missteps, and stay resilient as the nature of work evolves.
GovTech's vision for an AI-ready Singapore workforce
GovTech is accelerating Singapore’s transition toward an AI-enabled public sector. As our digital government capabilities mature, AI now represents the next transformative frontier – one that enhances, not replaces, human judgment and service delivery. At the core of this shift is the Government AI Blueprint. It is an actionable framework that charts our path from today’s digital government to tomorrow’s AI-native government. It sets out how we will systematically embed AI as a foundational capability that amplifies the impact of every public officer, every agency, and every service we deliver.
The Government AI Blueprint outlines three pillars that GovTech will strengthen to drive whole-of-government AI transformation.
Stewardship: Exercising AI leadership by creating the conditions for safe, responsible and impactful AI adoption across the public sector. This includes establishing standards for AI development and use, and growing AI competencies across the workforce.
Equipping: Laying the technical foundations, ensuring AI-ready data, making available common infrastructure and AI capabilities, providing trusted AI governance mechanisms needed for safe, efficient and scalable AI use across Whole-of-Government.
Applying: Developing and executing AI action plans that empowers officers and agencies use AI to improve how Government serves, and how Government works. This ensures AI becomes a value-driven accelerator, transforming operations and unlocking new levels of government service.
Across agencies, AI is being harnessed to improve productivity and service delivery. Public officers are increasingly empowered to automate routine tasks, develop chatbots that support citizen engagement, and redirect their focus toward higher-value work such as policy design, stakeholder collaboration, and frontline service.
To cultivate an AI-ready culture, GovTech invests in continuous learning and innovation. GovTech Innovation Day, for example, brings together public officers and tech professionals to explore digital transformation through prompt engineering challenges, panel discussions, and showcases of prototypes from hackathons and innovation programmes. Complementing this is Lorong AI, a co-working AI hub that helps officers discover AI capabilities, share knowledge, and connect with peers across the ecosystem that includes people from industry and research sectors.
By embedding AI into infrastructure, governance, and training — and fostering a culture of experimentation — GovTech is building a future-ready workforce where human judgement, creativity, and oversight remain central to public service excellence.
Collaborating for a productive and inclusive future
The core message is clear: AI should serve as a co-pilot, not an autopilot.
When guided by strong governance, AI can amplify productivity, creativity, and inclusion — helping people work smarter while safeguarding accountability and trust.
As Singapore continues to shape a Smart Nation powered by technology and empathy, GovTech remains committed to ensuring AI strengthens our workforce and enriches public service delivery — keeping technology in service of people, always.
Connect with us!

Subscribe to the TechNews email newsletter

