MO® Compliance Chat

The FCA AI LAB Supercharged Academy 2026

Written by Chris Davies | May 12, 2026 2:23:44 PM

In January ’26, Model Office was lucky enough to be selected to participate in the FCA's inaugural AI Lab Supercharged Academy. Thirty-two AI firms came together, with the excellent support of the Centre for Finance, Technology and Entrepreneurship (CFTE), for a ten-week immersive programme that has contributed to some of the most important stages in the evolution of our RegTech business.

It is clear to me that the discussion around AI in financial services is now moving beyond experimentation. The focus is shifting towards implementation, governance, trust, operational deployment and commercial reality. For firms looking at AI and Agentic AI today, that distinction matters.

There is still a lot of noise in the market. Endless commentary around disruption, replacement and transformation. But the reality inside regulated financial services is more practical than that. Firms are trying to understand how AI can remove friction, improve oversight, increase consistency, reduce operational cost and produce better data and management information without increasing regulatory exposure or risk.

That was one of the most valuable aspects of the Academy.

It was not positioned as a theoretical AI course. It was grounded in the realities of regulated markets, institutional buying behaviour, governance and operational deployment. The programme itself reflected this structure week by week, covering positioning, execution feasibility, institutional procurement, regulatory alignment, AI design rationale, AI-native operating models and sustainable commercial models.

One of the clearest takeaways for me is this:

Strong innovation rarely fails because the technology does not work. It fails because it cannot be trusted, approved, procured or embedded into existing operational workflows.

That changes how founders and firms should think about AI deployment.

For us at Model Office, it reinforced that we are not simply building RegTech software. We are building defensibility for regulated firms. The real value is not just automation. It is creating evidence, auditability, governance visibility and operational assurance around increasingly autonomous workflows.

That becomes even more important as Agentic AI starts moving into production environments.

Over the course of the Academy, a consistent theme emerged around where AI should and should not sit within regulated workflows. There was a strong focus on understanding where AI adds value, where human oversight remains critical and how accountability must remain visible throughout the process. This is where I think many firms currently misunderstand Agentic AI.

The opportunity is not simply replacing people with agents. The opportunity is redesigning fragmented operational workflows where repetitive manual activity currently dominates across areas such as:

· Audit reviews and reports

· Client file audits

· Governance reporting

· Risk monitoring

· Compliance document reviews

· Training and competency oversight

· Management information and data-driven audit trails

· Regulatory evidence gathering and horizon scanning

These are all highly process-driven environments with repeatable structures, large data volumes and significant manual overhead. When deployed properly, Agentic AI can materially reduce operational friction across these workflows.

But there is an important caveat. Governance is now part of the product itself. This is probably the single biggest shift in my thinking I took away from the Academy.

The market is moving towards embedded governance models where monitoring, auditability, escalation logic and evidence production are built directly into AI-enabled workflows rather than sitting externally as afterthoughts.
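As a simple illustration of what "escalation logic built directly into the workflow" can mean in practice, here is a minimal sketch. The function name, confidence threshold and risk flags are all illustrative assumptions, not a description of any specific product:

```python
# Illustrative sketch of embedded escalation logic. The threshold value and
# flag names are assumptions for the example, not a real product's settings.

def route_output(confidence: float, risk_flags: list, threshold: float = 0.9) -> str:
    """Decide whether an AI output proceeds automatically or escalates to a human."""
    if risk_flags or confidence < threshold:
        # Oversight stays visible inside the workflow rather than bolted on later.
        return "escalate_to_human"
    # Auto-proceed paths are still logged for audit, not treated as afterthoughts.
    return "auto_proceed"

print(route_output(0.95, []))          # high confidence, no flags
print(route_output(0.80, []))          # below threshold
print(route_output(0.99, ["pii"]))     # flagged regardless of confidence
```

The point of the sketch is that the escalation rule is part of the workflow code itself, so every routing decision is inspectable and testable rather than sitting in an external policy document.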

Regulators are increasingly interested not just in outputs, but in how outputs are produced. That distinction is critical.

As AI systems become more autonomous, expectations around oversight, ownership, transparency and evidence increase significantly. Firms will need to demonstrate:

· Why a decision was made

· What data informed it

· Where human intervention occurred

· What controls existed

· How risks were monitored

· How outcomes were validated
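The six evidence requirements above map naturally onto a structured audit record. The sketch below is a hypothetical illustration; the `AuditRecord` structure and field names are my assumptions for the example, not Model Office's implementation:

```python
# Illustrative evidence record for one agentic workflow step.
# Field names are assumptions mirroring the six requirements in the text.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    decision: str              # why the decision was made (stated rationale)
    data_sources: list         # what data informed it
    human_intervention: str    # where human intervention occurred
    controls: list             # what controls existed
    risk_checks: list          # how risks were monitored
    validation: str            # how the outcome was validated
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: one compliance-review decision captured as evidence.
record = AuditRecord(
    decision="Client file flagged for manual review: missing suitability report",
    data_sources=["client_file_1234", "suitability_policy_v3"],
    human_intervention="Escalated to compliance officer before any client contact",
    controls=["confidence threshold", "four-eyes approval"],
    risk_checks=["PII redaction check", "model drift monitor"],
    validation="Outcome sampled in weekly QA review",
)
print(asdict(record)["decision"])
```

Emitting a record like this at every autonomous step is one way evidence production becomes a by-product of the workflow rather than a separate reporting exercise.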

That creates a major opportunity for firms deploying AI responsibly.

Another interesting discussion point throughout the programme was AI-native operating models.

A real light-bulb and validating moment for me was the Academy mentors' view that, historically, scaling software businesses often meant scaling headcount, management layers and operational overhead. AI changes that equation.

Now, smaller, highly capable teams, founder-led for longer and supported by AI systems, can operate with leverage that would previously have required significantly larger organisations. This is essentially the Model Office business model: remain virtual and highly efficient. The Academy explored this directly through discussions around AI-native capability models and sustainable AI-era business structures.

I believe this will fundamentally reshape parts of financial services technology over the next three to five years: not because humans will be replaced and disappear, but because automating repetitive operational activity will make businesses increasingly system-led, monitored and exception-driven, under human stewardship.

The other major learning was commercial rather than technical. Many AI founders remain overly focused on capability rather than adoption. The Academy repeatedly challenged participants to think harder about:

· Why a client would buy rather than build

· Whether products genuinely become embedded into operational workflows

· Whether usage and stickiness exist beyond initial pilots

· How pricing models evolve as AI changes value delivery

· How commercial narratives are articulated in crowded markets

Those are difficult but necessary questions, especially as enterprise firms become more confident experimenting internally with foundation models and AI tooling.

For AI firms, the answer increasingly sits in workflow integration, domain expertise, governance, risk and compliance capability, defensibility, supervision and regulatory intelligence, trust and operational embedding, rather than raw model capability alone. AI and RegTech vendors offering specialised, end-to-end software that meets specific challenges, integrates into existing systems and automates effectively will always provide a valuable solution.

Another encouraging aspect of the programme was seeing regulators, academia and industry engaging constructively together. The FCA, alongside other global regulators such as the Monetary Authority of Singapore, clearly recognise both the opportunity and the responsibility associated with AI deployment in financial services. The direction of travel feels increasingly collaborative rather than adversarial.

That matters because the pace of AI development means regulation cannot operate in isolation from industry reality.

Overall, the Academy forced many of us to step back from day-to-day operational execution and think more critically about long-term durability.

It’s not just about what AI can do; it’s about asking key questions such as:

· What should AI do?

· How should it be governed?

· How does it become trusted?

· How does it scale responsibly?

· How does it become operationally sustainable?

These are much harder questions that need to be addressed with care and due diligence. But they are the questions that will define which AI businesses and regulated firms succeed over the next decade. The technology itself is moving incredibly quickly.

The firms that will gain the greatest advantage are likely to be those that combine AI capability with governance, operational discipline, integration, data quality and staff and client trust. This is where sustainable value will be created.

Please click the icon below to learn more about MO RegTech today.