By: Philip Hardy and Chris Baker, Partners at Ashurst Risk Advisors
Why speed is no defense when precision fails, and what today’s leaders must ask before trusting the tech.
In a world obsessed with faster, cheaper outputs, AI has made ‘good enough’ look very tempting when it comes to legal and risk advisory work. Need an obligation map? “There’s a tool for that!” Want to summarise 400 regulatory clauses? “Just prompt the bot.”
But compliance isn’t a race – it’s a contract with regulators, stakeholders, and the public. And when shortcuts miss the mark, “We used AI” simply won’t get you off the hook. In fact, it might raise the bar for what’s considered reckless disregard.
Speed ≠ Safety: The Case of the Collapsing Proposal
Let’s start with a recent real-life story.
A multinational firm wrestling with niche rules recently invited proposals from several firms. Our bid emphasised expertly curated obligation libraries, legal and risk oversight, and ‘incremental AI assistance’. Another vendor promised a single platform that would “write all obligations, map all controls, and keep them updated automatically”.
During due diligence, however, the other vendor conceded it could offer speed, but not accuracy: it could give no assurance that the tool’s recommendations were correct, or that they would satisfy a regulator asking the reasonable-steps question. The firm’s compliance leaders pressed harder: would the vendor underwrite the output? The answer was no. The value proposition collapsed, and along with it, the illusion that AI without expert oversight can meet the needs of complex regulated entities and placate their supervisory bodies.
Context ≠ Comprehension: The Case Where Automation Missed Real-World Control
In another cautionary tale, a high-risk venue operator initially relied on AI-generated risk controls to satisfy venue compliance rules (i.e., no under-18 patrons). The tool pulled in industry practice and recommended a range of complex measures, but it completely missed a key, simple, manual control: the presence of two full-time security staff who checked patrons on entry. AI simply couldn’t see what wasn’t written down.
This offers a sobering lesson: just because AI can summarise what’s on a page doesn’t mean it understands what happens on the ground.
When AI Belongs in Your Compliance Stack
None of this is a blanket warning against using AI. Used properly, AI is already driving value in risk and compliance, including:
- Scanning policy libraries for inconsistent language
- Flagging emerging risks in real time from complaints or case data
- Improving data quality at capture
- Drafting baseline documentation for expert review
- Identifying change impacts across jurisdictions and business units
But note the pattern: AI handles volume and repetition; humans handle nuance and insight. The most robust use cases right now treat automation as an accelerant, not a replacement, because the line between support and substitution must be drawn carefully and visibly.
Ask This First Before Plugging in Your Next Tool
As regulators pivot from rule-based assessments to ‘reasonable steps’ accountability, the key question is no longer just “Did we comply?” but “Can we prove we understood the risk and chose the right tools to manage it?” If your AI-assisted compliance map can’t explain its logic, show its exclusions, or withstand scrutiny under cross-examination, then you don’t have a time-saver – you’ve got a liability.
So, before you plug in an ‘all-in-one automation’ solution, first ask: Will this tool produce explainable and auditable outcomes? Is there clear human oversight at every high-risk stress point? Can we justify our decision to use this tool, especially when something goes wrong? If the answer to any of these is no, you’re not accelerating your compliance strategy – you’re undermining it.
We all love speed, but in risk, speed without precision is a rounding error waiting to become a headline. Compliance leaders have a duty to make sure that what’s fast is also right and that when it’s not, there’s someone accountable.
In this era of ‘good enough’ AI, being good is simply no longer good enough. Being right is.
Disclaimer: The information provided in this article is for general informational purposes only and should not be construed as legal or professional advice. Businesses and organisations using AI in compliance and risk management should seek expert guidance to ensure the accuracy, transparency, and accountability of their AI tools.