BankingNewsAI Daily Brief  ·  Tuesday, March 24, 2026

RBC embeds AI in core workflows and manages it as a P&L line.

🏦 3 Banking AI  ·  🤖 3 General AI

Banking AI

Financial institutions & fintech technology

3 stories
01 · pymnts.com

RBC puts AI into core workflows and is now managing it like a P&L line item ($1B target by 2027)

RBC says it’s embedding AI across developer workflows, capital markets infrastructure, and enterprise decision-making, with a stated goal of up to $1B in AI-generated enterprise value by 2027. The notable shift is governance: this reads less like experimentation and more like a bank committing to measured value capture (revenue lift + productivity) at scale.

Action

Set an explicit enterprise-value target for AI (with named use cases and owners) and tie it to quarterly operating metrics. Re-baseline your 2026–2027 cost and capacity plans assuming AI drives measurable throughput in tech, ops, and markets—then fund the controls (model risk, monitoring, auditability) required to run it safely.

Read article →
02 · finextra.com

MAS publishes an AI risk toolkit with real bank case studies—effectively a playbook for supervisory-ready AI controls

Singapore’s MAS launched an AI risk management toolkit for financial institutions, including case studies from DBS and others. This moves beyond principle statements into concrete practices supervisors can point to (controls, testing approaches, and operational lessons).

Action

Map your AI control framework to MAS’s toolkit structure and use it as evidence of “industry-standard” practice for internal audit/regulators. Package your highest-risk models (credit, fraud, AML, collections) into a standardized testing and documentation format so you can scale approvals instead of re-litigating governance per use case.

Read article →
03 · fstech.co.uk

Starling ships an agentic AI assistant inside its banking app—raising the bar for ‘action-taking’ customer service

Starling Bank launched what it calls the UK’s first agentic AI financial assistant in-app, positioned to deliver guidance and insights and to help customers manage day-to-day finances. The key change vs. “chat” features is agentic behavior: customers will expect the assistant to do things (not just answer questions), which forces stronger guardrails and handoffs.

Action

Define where your customer-facing AI is allowed to take action (and where it must escalate), then instrument it end-to-end (logs, decision traces, outcome monitoring). Prioritize 2–3 high-volume journeys (fee disputes, card controls, payment issues, budgeting) where an agent can cut cost-to-serve without creating new conduct risk.
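The "allowed to act vs. must escalate" pattern above can be sketched as a simple policy gate. This is a hypothetical illustration, not Starling's implementation: the action names, risk tiers, and `route_action` function are all invented for the example.

```python
# Hypothetical sketch: an action-permission gate for a customer-facing agent.
# Action names, risk tiers, and policy shape are illustrative only.

ALLOWED_ACTIONS = {
    "freeze_card": {"max_risk": "low"},          # reversible, low conduct risk
    "raise_fee_dispute": {"max_risk": "medium"},
}
ESCALATE_ACTIONS = {"close_account", "change_payout_details"}  # human-only

RISK_TIERS = ["low", "medium", "high"]

def route_action(action: str, risk_score: str, audit_log: list) -> str:
    """Decide whether the agent may act, must escalate, or is blocked —
    logging every decision so it can be traced and monitored end-to-end."""
    audit_log.append({"action": action, "risk": risk_score})  # decision trace
    if action in ESCALATE_ACTIONS:
        return "escalate_to_human"
    policy = ALLOWED_ACTIONS.get(action)
    if policy is None:
        return "blocked"  # not on the allowlist: agent may never improvise it
    if RISK_TIERS.index(risk_score) <= RISK_TIERS.index(policy["max_risk"]):
        return "execute"
    return "escalate_to_human"  # allowed action, but risk exceeds its ceiling
```

The design choice is an explicit allowlist: anything not enumerated is blocked by default, which keeps "new capability" a deliberate governance decision rather than an emergent behavior.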

Read article →

General AI

Large language models & AI infrastructure

3 stories
01 · pymnts.com

OpenAI is reportedly weeks from putting ads into ChatGPT—expect procurement, data-use, and brand controls to get harder overnight

Reuters reports OpenAI is preparing to show ads to U.S. users on ChatGPT’s free and “Go” tiers and has hired a senior ad executive from Meta. That signals a monetization shift that can change product incentives (engagement/targeting) and tighten constraints for enterprise risk teams that already struggle with “shadow ChatGPT” use.

Action

Update your genAI usage policy assuming consumer ChatGPT may become ad-supported and potentially more data-sensitive; steer staff to enterprise-approved endpoints with explicit data-handling terms. Require vendor attestations on ad targeting, prompt/data retention, and separation between ad systems and enterprise traffic before renewing licenses.

Read article →
02 · wiz.io

Wiz launches AI-APP to secure AI applications as systems (models + agents + tools)—not just prompts

Wiz launched an AI Application Protection Platform (AI-APP) aimed at discovering and securing AI apps across code, cloud, and runtime, explicitly treating AI risk as emerging from interconnected components (models, agents, tools, data, infra). This reflects the security market acknowledging that “LLM firewall” is insufficient once agents can call tools and touch data.

Action

Inventory every production and “near-prod” AI app as an application stack (model + tools + data + runtime) and assign an owner like any Tier-1 system. Expand threat modeling to include tool-call abuse, data exfil paths, and agent permissions, and align IAM/Zero Trust controls to agent identities—not just human users.
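The inventory step above amounts to recording each AI app as a stack with a named owner and a non-human identity. A minimal sketch, assuming a simple internal record format (the field names and `threat_surface` helper are illustrative, not Wiz's data model):

```python
# Hypothetical sketch: treating each AI app as an application stack with an
# accountable owner and an explicit agent identity for IAM/Zero Trust policy.
from dataclasses import dataclass, field

@dataclass
class AIAppRecord:
    name: str
    owner: str                   # accountable person, as for any Tier-1 system
    model: str                   # model family/version in use
    tools: list = field(default_factory=list)         # tool calls the agent may make
    data_domains: list = field(default_factory=list)  # data the stack touches
    agent_identity: str = ""     # non-human identity, scoped like a service account

    def threat_surface(self) -> list:
        """Enumerate review items for threat modeling: every tool call is a
        potential abuse path, every data domain a potential exfil path."""
        return ([f"tool:{t}" for t in self.tools]
                + [f"data:{d}" for d in self.data_domains])
```

Enumerating the surface per record makes the threat-modeling expansion mechanical: a new tool or data domain cannot be added without widening a reviewable list.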

Read article →
03 · pymnts.com

Capital One’s Databolt adds capabilities to tokenize unstructured data for AI use—signal that ‘AI-ready’ privacy engineering is productizing fast

Capital One Software is enhancing its vaultless tokenization product (Databolt) to help enterprises use unstructured data for AI while protecting sensitive information. This is a concrete move toward making “use the data safely” an off-the-shelf capability rather than a bespoke privacy project.

Action

Accelerate unstructured-data AI programs (calls, emails, docs) by standardizing tokenization/redaction as a shared service with clear patterns for re-identification controls. Use tokenization plus policy enforcement to unlock higher-value use cases (complaints, conduct, collections, KYC document workflows) without expanding data exposure.
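The tokenization-as-shared-service idea can be sketched with deterministic keyed tokens: detected identifiers are replaced before text reaches an AI pipeline, and the same input always yields the same token, so joins and analytics still work. This is an illustrative pattern only, and not Databolt's actual mechanism; the regex, key handling, and token format are assumptions.

```python
# Hypothetical sketch of vaultless-style deterministic tokenization for
# unstructured text. Raw values never reach the AI pipeline; re-identification
# is gated by control of the key, not a lookup vault.
import hmac, hashlib, re

SECRET_KEY = b"demo-key"  # in practice: held and rotated in an HSM/KMS

def tokenize(value: str) -> str:
    """Derive a stable token from the secret key and the raw value."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]
    return f"<TOK:{digest}>"

ACCOUNT_RE = re.compile(r"\b\d{8,16}\b")  # naive account/PAN-like pattern

def redact_text(text: str) -> str:
    """Replace account-like numbers with deterministic tokens."""
    return ACCOUNT_RE.sub(lambda m: tokenize(m.group()), text)
```

Determinism is the point for the use cases listed above: complaints or KYC documents tokenized this way can still be linked across records without re-exposing the underlying identifier.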

Read article →

Get this in your inbox every morning

Free · No spam · Unsubscribe anytime

Subscribe free →