BankingNewsAI Daily Brief · Wednesday, April 22, 2026
Banking AI
Financial institutions & fintech technology
UK FCA expands “live AI testing” with Barclays, UBS, Lloyds/Scottish Widows, Experian and others
The UK FCA has named an eight-firm second cohort for its AI Live Testing program, explicitly moving beyond sandbox talk into supervised, in-production-style testing. Participants include Barclays, UBS, Lloyds Banking Group (via Scottish Widows), and Experian, signalling the regulator is formalising expectations for controls, monitoring, and evidence as AI use cases go live.
Action
Treat FCA Live Testing as a de facto playbook: align your model/agent governance, validation evidence, and monitoring metrics with what the FCA will expect in supervised tests. If you operate in the UK (or sell to UK banks), pressure-test your highest-risk AI use cases now against FCA-style assurance requirements before those requirements become exam questions.
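One monitoring metric supervisors routinely expect from banks is drift evidence between a model's development-time score distribution and what it sees in production. As an illustration (not an FCA-prescribed method), here is a minimal Population Stability Index calculation, a standard bank model-risk metric; the bin count and thresholds are the conventional defaults, not regulatory values:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a model's development-time score distribution and its
    current production distribution. Conventional reading: < 0.1 stable,
    0.1-0.25 worth investigating, > 0.25 a material shift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against zero-width range

    def bucket_shares(scores):
        counts = [0] * bins
        for s in scores:
            counts[min(int((s - lo) / width), bins - 1)] += 1
        total = len(scores)
        # floor each share so empty buckets don't produce log(0)
        return [max(c / total, 1e-6) for c in counts]

    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Wiring a check like this into scheduled monitoring, with the results retained, is the kind of evidence trail a supervised live test would ask to see.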
Revolut trained a proprietary decisioning model on 40B transactions instead of buying frontier LLM access
Revolut claims it stopped paying for third-party frontier model access and built an internal model (“PRAGMA”) trained on 40 billion transactions and behavioral events from 25 million customers. The positioning is notable: it’s not a chat assistant but a decision engine for risk/operations (who to onboard, what to flag, what to approve) using proprietary bank-grade data.
Action
Prioritize proprietary-data advantage over model-brand advantage: stand up an internal decisioning layer that can be audited, versioned, and stress-tested, even if you still use external LLMs for language tasks. Reassess your AI cost structure and vendor concentration risk—core credit/fraud decisions are drifting toward bank-owned models trained on first-party data.
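What "audited, versioned, and stress-tested" can mean in practice: every decision carries the model version, its inputs, and a tamper-evident audit record. A minimal sketch, where the scoring rule, thresholds, and field names are placeholders (this is not Revolut's actual model, just the wrapper shape):

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Decision:
    model_version: str
    inputs: dict
    outcome: str
    reasons: list
    decided_at: str
    record_hash: str = ""

class DecisioningLayer:
    """Hypothetical internal decisioning wrapper: each decision is
    versioned, logged, and hash-chained so the audit trail is tamper-evident."""

    def __init__(self, model_version):
        self.model_version = model_version
        self.audit_log = []
        self._prev_hash = "genesis"

    def decide(self, inputs):
        # Placeholder rule standing in for the real model:
        # flag high-value transfers to new counterparties.
        flagged = inputs.get("amount", 0) > 10_000 and inputs.get("new_counterparty", False)
        outcome = "REVIEW" if flagged else "APPROVE"
        reasons = ["amount>10k to new counterparty"] if flagged else []
        record = Decision(self.model_version, inputs, outcome, reasons,
                          datetime.now(timezone.utc).isoformat())
        # Chain each record's hash to the previous one for auditability.
        payload = self._prev_hash + json.dumps(asdict(record), sort_keys=True)
        record.record_hash = hashlib.sha256(payload.encode()).hexdigest()
        self._prev_hash = record.record_hash
        self.audit_log.append(record)
        return record
```

The point of the wrapper is that swapping the placeholder rule for a trained model changes nothing about the audit surface: version, inputs, outcome, and reasons are captured the same way.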
Piraeus Bank creates an enterprise AI hub with Accenture + Anthropic to centralize delivery and controls
Piraeus Bank is shifting from scattered AI deployments to an enterprise AI hub built with Accenture and Anthropic. The explicit move is organizational and operational: a unified capability to standardize tooling, governance, and rollout across the bank, rather than one-off use cases managed by individual teams.
Action
Centralize “agentic” build-and-run the same way you centralized cloud: create a bank-owned platform team that enforces identity, authorization, model risk, and monitoring across all AI use. Use a hub to shorten time-to-production while tightening control—otherwise every business line will reinvent risk management for agents.
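The hub pattern boils down to one chokepoint every model call must pass. A minimal sketch, assuming an in-memory approval registry and a pluggable backend; the team names, model names, and registry shape are invented for illustration:

```python
import time
from collections import defaultdict

class AIGateway:
    """Hypothetical central AI hub: one chokepoint that checks caller
    identity, use-case authorization, and records usage before any
    model backend is invoked."""

    def __init__(self):
        # Approval registry: (team, model) pairs cleared by the platform
        # team. In production this would live in a governed config store.
        self.registry = {
            ("fraud-ops", "fraud-triage-model"): True,
            ("contact-centre", "call-summarizer"): True,
        }
        self.usage = defaultdict(int)

    def call(self, team, model, prompt, backend):
        if not self.registry.get((team, model)):
            raise PermissionError(f"{team} is not approved for {model}")
        self.usage[(team, model)] += 1          # usage metering per use case
        started = time.monotonic()
        response = backend(prompt)              # real model client goes here
        latency = time.monotonic() - started
        return {"response": response, "latency_s": latency}
```

Because every business line routes through `call`, identity checks, model-risk approvals, and monitoring are enforced once instead of being reinvented per team.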
General AI
Large language models & AI infrastructure
OpenAI is industrializing Codex for enterprises via GSIs (systems integrators), not just direct sales
OpenAI launched “Codex Labs” and announced partnerships with major global systems integrators to deploy Codex across thousands of engineering organizations. This is a go-to-market shift: Codex is being packaged as an enterprise rollout motion (implementation, change management, governance) rather than a developer tool teams adopt organically.
Action
Assume AI coding is moving into large-scale, governed deployments: define SDLC guardrails (secure code policies, secrets handling, audit trails, model/tool approvals) before integrators bring “turnkey” Codex programs into your environment. Benchmark your software productivity and control metrics now so you can quantify gains and spot new risks quickly.
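A concrete example of one such guardrail: a pre-merge check that scans the added lines of a diff for hard-coded credentials before AI-generated code lands. The patterns below are illustrative, not exhaustive:

```python
import re

# Illustrative secret signatures; a real deployment would use a maintained
# ruleset, these three are just common shapes.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                # AWS access key id
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{12,}"),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
]

def scan_diff(diff_text):
    """Return (line_no, line) pairs for added lines that look like secrets."""
    findings = []
    for no, line in enumerate(diff_text.splitlines(), start=1):
        if not line.startswith("+"):
            continue  # only inspect lines the change introduces
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append((no, line))
    return findings
```

Run as a required CI step, a check like this gives you a hard stop and an audit trail regardless of whether the offending line was typed by a human or suggested by Codex.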
OpenAI upgraded ChatGPT image generation (Images 2.0) to reliably render text—raising brand, fraud, and document risks
TechCrunch reports OpenAI’s new Images 2.0 model is notably better at generating legible text inside images, a long-standing weakness of image models. That seemingly small capability jump makes it easier to create convincing “visual documents” (signage, IDs, statements, screenshots) at scale.
Action
Harden your fraud controls for synthetic documents and screenshots: update your document verification and claims intake to assume high-quality AI-generated text-in-image is now a commodity. Run red-team tests against your KYC, chargeback, and dispute workflows using Images 2.0-style artifacts to identify brittle checks.
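A red-team pass like that can be run as a simple harness: replay a library of synthetic artifacts through each document check and record which ones the control should have blocked but accepted. Everything below is a hypothetical sketch; `verifier` is whatever real KYC or dispute-intake check you wire in (returning True means the document was accepted):

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    name: str           # e.g. "synthetic_bank_statement.png" (illustrative)
    channel: str        # workflow under test: "kyc", "chargeback", "dispute"
    should_block: bool  # red-team expectation: a sound control rejects it

def run_redteam(artifacts, verifier):
    """Replay artifacts through a document check; report which synthetic
    items escaped a control that should have blocked them."""
    report = {"passed": [], "escaped": []}
    for a in artifacts:
        accepted = verifier(a)
        if accepted and a.should_block:
            report["escaped"].append(a.name)  # brittle check found here
        else:
            report["passed"].append(a.name)
    return report
```

The "escaped" list, broken out by channel, tells you exactly which intake paths still trust text-in-image evidence they shouldn't.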