Job Description
AI Performance Engineer
Posting Start Date:  04/03/2026
Job Category:  Entry-Level
Job Family:  Infocomm Technology & Smart Systems

What the role is

At #TeamCPF, you’re not just joining a team; you are embracing a culture of excellence, collaboration, and meaningful impact. You will play a pivotal role in empowering over 4 million members to meet their retirement, healthcare and housing needs, and to better navigate life’s uncertainties.

We thrive on sharp minds and insightful decisions. Your ability to analyse and think critically isn't just valued; it's essential. Every choice you make contributes to our collective success.

Collaboration is our way of life. We believe in the power of effective partnerships and seamless communications across teams. Together, we amplify each other’s strengths and achieve remarkable results.

Our learning never stops. We encourage your inquisitiveness and courage to embrace new challenges head-on. Your agility, readiness to challenge conventions, embrace of data-driven strategies, and dedication to learning and applying new skills fuel our innovation and progress.

At the core of everything we do lies a genuine desire to make a difference. We serve our community and support each other with compassion, empathy, and unwavering dedication. Every action we take is guided by a deep sense of purpose and a commitment to those we serve.

Join us at #TeamCPF! Together, let's redefine possibilities and leave a legacy that echoes for generations.

What you will be working on

The AI Performance Engineer is a new role within the AI Enablement Office, a team dedicated to helping CPF Board—and Government as a whole—get the most out of AI.

As AI adoption accelerates, we’ve found that building AI systems is only half the challenge. The other half is making them work well. Today, most teams know how to build and test traditional software, but AI systems behave differently. They produce non-deterministic outputs, fail in subtle and context-dependent ways, and require fundamentally different approaches to evaluation and improvement. This role exists to close that gap.

You will work hands-on to evaluate, diagnose, and improve GenAI and agentic AI systems, while also contributing to the organisation’s growing capability in this space. Think of this as the AI equivalent of what Site Reliability Engineering did for infrastructure: a disciplined practice focused on making AI systems actually perform in production.

This is a new function—not just at CPF Board, but across the broader ecosystem. The discipline of AI performance engineering is still being defined. You should be comfortable with ambiguity, excited about shaping something new, and ready to figure things out alongside the team.

In this role, you will focus on two areas:

Evaluating and Improving AI Systems (~70% of the role)

  • Design and execute evaluation frameworks for GenAI and agentic AI systems, measuring performance against real-world use cases using techniques such as LLM-as-judge, human evaluation protocols, and automated test suites.
  • Systematically diagnose failure modes in AI systems—understanding why an agent selected the wrong tool, why a retrieval step missed relevant context, or why a prompt produces inconsistent outputs—and implement targeted fixes.
  • Iterate on prompts, tool definitions, agent workflows, context engineering, and orchestration logic to improve system outputs. Frameworks in use include LangGraph and N8N.
  • Build regression testing and benchmarking pipelines to ensure AI systems maintain or improve performance over time, drawing on emerging practices in AI evaluation and observability.
  • Collaborate with product and engineering teams to define what “good” looks like for AI outputs in specific business contexts—translating domain requirements into measurable evaluation criteria.
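
To make the evaluation work above concrete, here is a miniature sketch of a regression suite for LLM outputs. It is illustrative only, not a CPF Board implementation: the `EvalCase` fields, the keyword-based judge (standing in for an LLM-as-judge call or human review), and the pass-rate metric are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class EvalCase:
    """One regression case: a prompt plus a simple pass criterion."""
    prompt: str
    required_keywords: List[str]  # hypothetical criterion for this sketch


def keyword_judge(output: str, case: EvalCase) -> bool:
    # Cheap automated check; in practice this step might be an
    # LLM-as-judge call or a human evaluation protocol instead.
    return all(k.lower() in output.lower() for k in case.required_keywords)


def run_suite(model: Callable[[str], str], cases: List[EvalCase]) -> float:
    """Run every case through the model and return the pass rate."""
    passed = sum(keyword_judge(model(c.prompt), c) for c in cases)
    return passed / len(cases)
```

A production pipeline would additionally log per-case traces to an observability platform and fail CI when the pass rate regresses below an agreed baseline.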

Building Organisation Capability (~30% of the role)

  • Contribute to playbooks, guides, and reusable templates on AI evaluation, prompt engineering, and performance improvement for IT teams across CPF Board.
  • Support workshops and knowledge-sharing sessions to help other teams adopt best practices for testing and tuning AI systems.
  • Document patterns, anti-patterns, and lessons learned to build institutional knowledge in a rapidly evolving field.

What we are looking for

We value the diverse talents and experiences that each individual brings to the table. While mastery of every requirement may not be necessary, familiarity and expertise in some of the following areas will position you for success within this team.

  • Some software development experience, with the ability to write and debug code effectively. Fresh graduates with strong technical foundations and demonstrated curiosity are welcome.
  • Familiarity with Large Language Model (LLM) APIs and core concepts (tokens, context windows, temperature, tool use). Hands-on experience with frameworks like LangChain, LangGraph, or LlamaIndex is advantageous but not essential.
  • Understanding or strong interest in AI evaluation methods, prompt engineering, and agentic AI patterns (e.g. ReAct, tool calling, multi-step workflows).
  • Strong analytical and problem-solving skills. You should be comfortable reasoning about why an AI system is underperforming and forming hypotheses to test.
  • Excellent written communication skills. Much of this work involves writing prompts, evaluation criteria, documentation, and playbooks, so clarity of expression matters.
  • Intellectual curiosity and a genuine interest in how AI systems work under the hood.
  • Comfort with ambiguity and willingness to help define a new function from the ground up. There is no established playbook for this role; you will help write it.
  • Proactive, self-driven attitude with the ability to work both independently and collaboratively.
  • Experience with AI evaluation and observability platforms (e.g. LangSmith, Braintrust, or similar).
  • Familiarity with workflow orchestration tools such as LangGraph, N8N, or similar low-code automation platforms.
  • Cloud-native development experience.
  • Desire and aptitude to be full-stack—comfortable spanning from infrastructure to UX when needed.
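
The agentic patterns mentioned above (ReAct, tool calling) centre on one mechanical step: parsing a model-emitted tool call and dispatching it. A minimal sketch, with a hypothetical tool registry whose names, arguments, and return values are invented for illustration:

```python
import json

# Hypothetical tool registry; names and payloads are illustrative only,
# not an actual CPF Board API.
TOOLS = {
    "get_account_type": lambda member_id: {
        "member_id": member_id,
        "account": "Ordinary",
    },
}


def dispatch(tool_call_json: str) -> dict:
    """Execute one model-emitted call of the form
    {"tool": "<name>", "args": {...}} and return its result."""
    call = json.loads(tool_call_json)
    tool = TOOLS.get(call["tool"])
    if tool is None:
        # Surfacing unknown-tool errors back to the agent loop is exactly
        # the kind of failure mode this role would diagnose.
        return {"error": f"unknown tool: {call['tool']}"}
    return tool(**call["args"])
```

In a ReAct-style loop, the error dictionary would be fed back to the model as an observation so it can recover, rather than silently failing.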

The seniority of appointment and actual corporate job title will be commensurate with individual work experience.

Position is on a 2-year full-time contract directly under the payroll of CPF Board with an option to renew, contingent upon confirmation and subject to organisational needs. Additionally, there is potential for emplacement into a permanent position.

What you can expect

Being part of #TeamCPF means embarking on a challenging and rewarding career in a progressive workplace that values productivity and growth. Here’s what awaits you:

  • Opportunities to engage in a mix of formal and informal training, keeping your skills sharp in our ever-evolving technological landscape. 
  • Promotion opportunities based on your capability and on-the-job performance. 
  • A vibrant community of like-minded and friendly colleagues, where collaboration and creativity thrive. 
  • A hybrid work model that offers flexibility for remote work, subject to exigencies of service. 
  • Flexible dress code that empowers you to choose your appropriate outfit for the day. 
  • A comprehensive rewards package that includes annual leave, pro-family leave, medical and dental benefits, and access to recreational activities.