While Firms Perfect Their AI Safety, Their Competitors Are Winning
And other uncomfortable truths about the direction of AI in 2025
Summary
Bloomberg research finds that RAG-based AI systems produce more unsafe responses in financial contexts than standard LLMs, upending industry safety assumptions.
Institutional trader AI adoption skyrocketed from 25% to 61% in three years while RBC commits to generating up to C$1 billion in AI revenue by 2027.
AI-powered surveillance systems have become the new compliance standard as traditional rule-based systems cannot detect AI-enabled market manipulation.
Regulators are shifting from guidance to enforcement with the EU AI Act's August 2025 deadline and SEC's elevated AI examination focus.
Organizations have 12-18 months to deploy workable AI frameworks at scale before competitors and regulators leave them behind.
The AI Safety Paradox: Why "Safer" Systems Are Making Finance More Dangerous
There's a delicious irony unfolding in financial technology that would make even the most seasoned risk manager chuckle nervously. Just as 61% of institutional traders have decided AI is their future¹, Bloomberg's researchers have discovered that the "safer" AI systems everyone's been rushing to deploy are actually more dangerous than the supposedly riskier alternatives².
This reveals a fundamental misunderstanding of how AI behaves in financial contexts, not just another case of unintended consequences. And it's happening right as the industry commits billions to AI infrastructure, regulators sharpen their examination procedures, and competitive pressures mount to deploy these systems at scale.
If you're wondering whether your AI strategy needs a rethink, the answer is probably yes. Here's why.
The RAG Problem Nobody Saw Coming
Bloomberg's research team has just blown up one of the foundational assumptions of financial AI safety. Retrieval-Augmented Generation (RAG) systems—the approach that combines large language models with external data retrieval—were supposed to be the responsible choice. More grounded, more factual, less prone to hallucination.
Turns out, that's wrong. At least in finance.
In a study presented at NAACL 2025, Bloomberg's researchers found that RAG systems produce more unsafe responses in financial contexts than standard LLMs². The very mechanism that was supposed to make AI safer—grounding responses in retrieved data—actually amplifies risks specific to our industry.
The problem lies in what Bloomberg calls "domain-specific risks." While RAG might reduce general hallucinations, it can inadvertently facilitate insider trading, market manipulation, or biased investment advice by retrieving and synthesizing information in ways that violate financial regulations. The system's confidence in its grounded responses makes these violations more convincing and harder to detect.
For anyone who's spent the last two years building AI governance frameworks around RAG-based systems, this is more than inconvenient—it's a fundamental challenge to current deployment strategies. The safety assumptions underlying most financial AI implementations may be backwards.
What this means for your Monday morning: If you're using RAG-based systems for client-facing applications, investment advice, or market analysis, run them through a specialized financial AI risk assessment framework, as in the sketch below. The general-purpose AI safety tools your vendors are selling likely won't cut it.
The Institutional Stampede
While safety experts debate frameworks, institutional traders have already made their choice. JPMorgan's annual e-Trading survey shows AI adoption jumping from 25% in 2022 to 61% in 2025¹—a 144% increase and arguably the fastest technology adoption in modern trading history.
The pace suggests a stampede rather than gradual adoption. And like most stampedes, it's being driven by fear as much as opportunity. Specifically, the fear of being left behind.
Electronic trading itself is expanding rapidly—FX markets moved from 65% to 73% electronic execution in just one year³. Combined with AI adoption, we're seeing a compound transformation that's reshaping competitive dynamics faster than most institutions can adapt their risk frameworks.
The traders driving this adoption aren't waiting for perfect safety solutions. They're responding to market pressures, client demands, and the reality that their competitors are already deploying these systems. When 41% of traders expect market volatility to be their biggest challenge in 2025¹, AI-powered solutions become less optional and more essential.
The uncomfortable truth: Your competitors are probably further along than you think. Success will come from deploying workable AI frameworks quickly rather than perfecting them slowly.
Show Me the Money: RBC's Billion-Dollar Bet
While others debate, Royal Bank of Canada is betting big. They've established dedicated AI teams across New York, Toronto, and London, with a clear mandate: generate C$700 million to C$1 billion in AI-driven revenue by 2027⁴.
RBC treats AI as a fundamental business strategy, not just another tech project. RBC's Aiden platform, which uses deep reinforcement learning for trading execution, exemplifies the shift from experimental AI to production-grade systems that directly impact the bottom line⁵.
The organizational changes are equally telling. Appointing Lindsay Patrick as Chief Strategy and Innovation Officer and Bobby Grubert as head of AI signals that AI strategy now reports directly to the C-suite rather than being buried in the technology organization⁴.
RBC's approach contrasts sharply with the industry trend toward vendor solutions. While 75% of UK financial firms now use AI, with third-party implementations rising from 17% to 33%⁶, RBC is betting on proprietary development. Their rationale: the competitive advantages of AI are too important to outsource.
Strategic implication: The build-versus-buy decision for AI capabilities may determine market positioning for the next decade. Vendor solutions provide faster deployment but limited differentiation. Proprietary systems offer competitive advantage but require significant investment and expertise.
The Surveillance Arms Race
Here's where things get weird as well as interesting. As AI enables more sophisticated market manipulation, only AI-powered surveillance can keep up. Trading Technologies' recognition as Trade Surveillance Product of the Year⁷ marks a pivotal moment—the industry's acknowledgment that traditional rule-based surveillance is obsolete.
This creates a fascinating dynamic. The same technology that enables new forms of market abuse is the only effective defence against it. Organizations using conventional surveillance systems are fighting tomorrow's battles with yesterday's weapons.
The regulatory implications are significant. Surveillance systems that can't detect AI-enabled manipulation leave institutions exposed to enforcement actions for failing to maintain adequate oversight. The technology shift serves regulatory compliance as much as efficiency.
Risk management reality: If your surveillance systems can't detect AI-driven market manipulation, you're not just missing opportunities—you're creating regulatory liability. You need to upgrade; the only question is how quickly you can do it without disrupting existing operations.
The Sovereignty Shuffle
NVIDIA's European "sovereign AI" initiative adds another layer of complexity⁸. Twenty AI factories across Europe, including five gigafactory-scale operations, address data sovereignty concerns while providing access to cutting-edge infrastructure.
The implications extend beyond regulatory compliance to competitive positioning. Dutch neobank bunq is already using NVIDIA-accelerated systems for fraud detection, while PayPal achieved 70% cost reduction in data processing⁹. The early movers in regional AI infrastructure are establishing advantages that will be difficult to replicate.
The partnership with French AI company Mistral, involving 18,000 NVIDIA Grace Blackwell systems, creates a European alternative to US-dominated AI infrastructure⁸. For global institutions, this fragmentation creates both opportunities and challenges.
Strategic consideration: The choice between global centralized AI systems and regional distributed infrastructure affects everything from regulatory compliance to competitive positioning. Early decisions about AI infrastructure architecture will have lasting implications.
Regulatory Reality Check
The regulatory landscape is shifting from guidance to enforcement. The US Treasury's comprehensive AI report, analyzing 103 industry comments, establishes clear expectations for governance, risk management, and third-party oversight¹⁰. The CFTC's approach requires AI systems to comply with existing regulations—no special treatment for innovative technology¹¹.
The EU AI Act's August 2025 implementation deadline creates immediate compliance pressure¹². The SEC's elevation of AI to a prominent focus in 2025 examinations signals enhanced scrutiny¹³. The message is clear: the experimental phase is over.
Compliance reality: The regulatory convergence across jurisdictions means comprehensive AI governance is becoming a global requirement, not a regional consideration. Organizations have roughly 12-18 months to establish compliant frameworks before examination cycles intensify.
What This Means for Your Organization
The convergence of these trends creates a narrow window for strategic positioning. Organizations that move too slowly risk competitive disadvantage. Those that move too quickly risk regulatory violations or operational failures.
The successful approach requires three simultaneous capabilities:
Rapid deployment of proven AI applications where competitive advantage is clear and risks are manageable. Trading execution, fraud detection, and client service automation fall into this category.
Comprehensive risk frameworks that address financial services-specific AI risks, not just general AI safety. Bloomberg's research shows that domain expertise is crucial for effective risk management.
Organizational transformation that positions AI as a business strategy, not a technology initiative. RBC's structure provides a template for integrating AI into core business operations.
The next 18 months will separate the winners from the also-rans. The winners will deploy good-enough AI strategies at scale while their competitors are still perfecting theirs on paper.
The AI revolution in finance isn't coming. It's here. The question is whether your firm is ready for it.
References
Liquidity and AI are top priorities in JPMorgan's annual e-trading survey - oneZero
Bloomberg's Responsible AI Research: Mitigating Risky RAGs & GenAI in Finance - Bloomberg LP
Industry in flux as electronic trading and risk control comes to the fore - Risk.net
Canadian lender RBC sets up new AI team for capital markets unit - Reuters
RBC's Aiden VWAP: A new era of AI trading in Europe - The TRADE
Artificial intelligence in UK financial services - 2024 - Bank of England
AI expert warns of algo-based market manipulation - Risk.net
Nvidia's pitch for sovereign AI resonates with EU leaders - Reuters
European Financial Services Industry Goes All In on AI to Support Smarter Investments - NVIDIA Blog
Treasury Releases Report on the Uses, Opportunities, and Risks of Artificial Intelligence in Financial Services - U.S. Department of the Treasury
AI Act | Shaping Europe's digital future - European Commission
SEC Will Prioritize AI, Cybersecurity, and Crypto in its 2025 Examination Priorities - White & Case LLP