
After more than three decades in boardrooms across Asia, one pattern is clear: every major disruption eventually becomes a governance issue. AI is no different — but it is moving faster than anything we’ve seen before.
Most organisations today believe they have AI “covered.” There are policies, ethical guidelines, and technical teams in place. Yet when I ask a simple question — can you clearly explain how an AI-driven decision was made, validated, and approved? — the answer is often unclear. That is where the real risk sits.
AI introduces a fundamentally new challenge. Decisions are no longer linear or fully human-led. They are driven by data patterns, continuously evolving models, and automated logic that operates at scale. A 2025 global survey found that over 60% of organisations cannot fully explain critical AI decisions, especially in high-impact areas like credit scoring, fraud detection, and hiring.
We have already seen the consequences. A global bank deployed an AI fraud detection system that significantly reduced fraud losses. However, it also began flagging legitimate transactions at scale, frustrating customers and triggering regulatory scrutiny. The system worked exactly as designed — but governance had not anticipated its behavioural impact.
This is the shift boards must understand. AI does not eliminate risk. It changes its nature.
Forward-looking organisations are moving beyond policy-based governance toward embedded accountability. This starts with clarity on three fronts:
- Where AI is used across the enterprise
- What decisions it influences or makes
- Where human judgment still applies
Without this clarity, oversight becomes symbolic. A practical first step is a simple enterprise AI register, sketched below.
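To make the register idea concrete, here is a minimal sketch of what one entry might capture. This is an illustration only: the `AIUseCase` schema, its field names, and the example entry are assumptions for the sake of the sketch, not a standard or any specific organisation's register.

```python
from dataclasses import dataclass
from enum import Enum


class DecisionRole(Enum):
    """How the system participates in decisions."""
    ADVISORY = "advisory"              # informs a human decision-maker
    HUMAN_APPROVED = "human_approved"  # recommends; a human approves
    AUTONOMOUS = "autonomous"          # decides and acts without review


@dataclass
class AIUseCase:
    """One entry in a hypothetical enterprise AI register."""
    name: str
    business_owner: str              # the accountable executive, not the vendor
    decisions_influenced: list[str]  # what it decides or shapes
    decision_role: DecisionRole      # where human judgment still applies
    human_override: bool             # can a person reverse the outcome?


register = [
    AIUseCase(
        name="Transaction fraud scoring",
        business_owner="Head of Retail Banking",
        decisions_influenced=["block transaction", "flag for manual review"],
        decision_role=DecisionRole.AUTONOMOUS,
        human_override=True,
    ),
]

# The question the register makes answerable at board level:
autonomous = [u.name for u in register
              if u.decision_role is DecisionRole.AUTONOMOUS]
print(f"Systems deciding without human review: {autonomous}")
```

The point is not the code but the discipline: once every system is an entry with an accountable owner and an explicit decision role, the question of where human judgment still applies has a concrete answer.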
AI risk also cuts across traditional silos. It is not just an IT or compliance issue. It spans:
- operational risk (model failure, drift)
- regulatory risk (non-compliance, explainability gaps)
- reputational risk (unexpected outcomes)
- ethical risk (bias, unfair outcomes)
Boards that treat AI as a standalone topic will miss systemic exposure.
Another critical shift is moving from periodic oversight to continuous assurance. AI systems evolve over time. Their outputs change as data changes. Annual reviews or static controls cannot keep pace. Leading organisations are implementing three mechanisms, illustrated in the sketch after this list:
- real-time monitoring of model behaviour
- decision traceability frameworks
- escalation triggers for anomalies
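As a simplified illustration of how these three mechanisms fit together, the sketch below assumes a board-approved baseline flag rate and a stubbed alerting hook. `BASELINE_FLAG_RATE`, `ESCALATION_FACTOR`, and the `escalate` function are hypothetical parameters chosen for the example, not a reference implementation.

```python
import uuid
from datetime import datetime, timezone

# Governance parameters a board or risk committee would approve.
BASELINE_FLAG_RATE = 0.02   # expected share of transactions flagged
ESCALATION_FACTOR = 2.0     # anomaly trigger: 2x the approved baseline

decision_log: list[dict] = []  # stand-in for a durable audit store


def record_decision(inputs: dict, output: str, model_version: str) -> str:
    """Decision traceability: every automated outcome gets an audit record."""
    trace_id = str(uuid.uuid4())
    decision_log.append({
        "trace_id": trace_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    })
    return trace_id


def check_flag_rate(decisions: list[dict]) -> None:
    """Real-time monitoring: compare observed behaviour to the baseline."""
    if not decisions:
        return
    flagged = sum(1 for d in decisions if d["output"] == "flagged")
    rate = flagged / len(decisions)
    if rate > BASELINE_FLAG_RATE * ESCALATION_FACTOR:
        escalate(f"Flag rate {rate:.1%} exceeds approved baseline "
                 f"{BASELINE_FLAG_RATE:.1%} by more than {ESCALATION_FACTOR}x")


def escalate(reason: str) -> None:
    """Escalation trigger: route the anomaly to human owners (stubbed)."""
    print(f"[ESCALATION] {reason}")


# Example: a burst of flagged transactions trips the trigger.
for i in range(100):
    outcome = "flagged" if i % 10 == 0 else "cleared"  # 10% flag rate
    record_decision({"txn_id": i}, outcome, model_version="fraud-v7")
check_flag_rate(decision_log)
```

In the fraud-detection example above, a trigger of exactly this kind, compared against a baseline the board had actually approved, would have surfaced the surge in flagged legitimate transactions before customers and regulators did.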
Globally, regulators are reinforcing this direction. The EU AI Act, along with emerging frameworks across ASEAN, emphasises explainability, accountability, and human oversight for high-risk systems.
The boards I work with are no longer asking, “How do we control AI?”
They are asking, “How do we design accountability into it?”
That is the real shift.
AI will continue to transform how organisations operate. But governance will determine whether that transformation builds trust — or creates risk.
StraitsTribe partners with boards and leadership teams to design AI governance models that align innovation with accountability, transparency, and real-time oversight.