Data wrangling – but the London Market is no spaghetti western
What keeps London Market COOs awake at night? Nick Mair, co-founder and CEO of bootstrapped insurtech start-up Atticus DQPro, explores the good, the bad and the ugly in the challenge of managing legacy system data.
In the 20 years since the historic Lloyd’s insurance market faced its day of reckoning with an asbestos-related solvency crisis, much has changed for the better. The emergence of corporate rather than private capital, increased contributions to the market’s central fund and ever-increasing regulation have produced a resilient market that last year managed to sustain losses of £2 billion (US$2.7 billion) whilst maintaining considerable underlying strength.
The market may no longer have a solvency issue, but it does face increasingly complex regulatory requirements that must be met. Regulations such as Solvency II and Lloyd’s own Minimum Standards add layers of compliance and reporting that require insurers to maintain reserves of compliant data as well as reserves of capital.
Catastrophic data-related business failure – such as a data breach, a major data quality issue or inaccurate reporting – is arguably a greater risk to an insurer’s business and reputation these days than under-capitalisation.
Earlier this year, Lloyd’s Performance Management Director, Jon Hancock, asserted that more would be done to reduce red tape for carriers, with reporting requirements cut back and a focus on the use of datasets and benchmarks to identify anomalies in classes of business. This fresh thinking is welcome – certainly, a more risk-based approach could help reduce the burden on managing agents, which has increased substantially in recent years.
Legacy system data wranglers
The harsh reality is that most legacy systems simply aren’t fit to support the modern demands of regulatory reporting. Instead, a huge amount of effort is expended on “data wrangling” – gathering data from various legacy systems, jamming it together in Excel or a datamart (if you’re more advanced), manually adjusting for quality errors and submitting in time to meet the deadline and avoid penalties.
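To make the scale of that effort concrete, here is a minimal sketch of what a single reporting cycle’s wrangling step often boils down to. The file names, column names and checks are illustrative assumptions, not any particular carrier’s setup.

```python
# Illustrative only: combine extracts from two hypothetical legacy systems
# and flag basic quality issues ahead of a regulatory submission.
import pandas as pd

# Assumed extract files and column names -- placeholders, not a real schema.
policies = pd.read_csv("legacy_pas_extract.csv",
                       parse_dates=["inception_date", "expiry_date"])  # policy admin system
claims = pd.read_csv("legacy_claims_extract.csv")                      # claims system

combined = policies.merge(claims, on="policy_ref", how="left")

# Hand-rolled quality checks that would otherwise be done by eye in Excel.
issues = pd.concat([
    combined[combined["gross_premium"].isna()].assign(issue="missing premium"),
    combined[~combined["currency"].isin(["GBP", "USD", "EUR"])].assign(issue="unexpected currency"),
    combined[combined["inception_date"] > combined["expiry_date"]].assign(issue="inception after expiry"),
])

issues.to_csv("quality_exceptions.csv", index=False)  # reviewed manually before submission
```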
Clearly, if this resource-heavy process is repeated monthly, quarterly or annually, it soon represents a considerable cost to the business. Underlying it all is a lack of basic confidence in the data. And when significant fines or increased capital loadings are at stake, confidence should be fundamental to every aspect of reporting.
Three core data concerns
There are three reasons why much of the specialty insurance market will continue to re-key risk data into in-house source systems for some time to come:
- The nature of legacy systems: having typically evolved over 10-15 years, they are proving slow to provide the staging and validation required to support the latest electronic messages and data standards.
- The size of the dataset: the London Market is complex – compared with personal lines such as motor and home, the datasets required for specialty risks are often three to five times larger in size and variability, and that added complexity increases the potential for human error at source.
- A distributed market: international business is channelled to London from legacy source systems of variable quality around the world. And whilst e-placing messages are a welcome step, removing the soft validation currently performed by a human operator makes automated, second-line validation even more essential (illustrated in the sketch after this list).
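As a rough illustration of what second-line validation means in practice, the sketch below applies a handful of rules to a risk record after it has been accepted from an upstream placing system. The field names and rules are hypothetical examples, not the DQPro rule set or any market standard.

```python
# Illustrative second-line validation of an incoming risk record.
# Field names and rules are hypothetical examples.
from typing import Callable

Rule = tuple[str, Callable[[dict], bool]]

RULES: list[Rule] = [
    ("UMR is present",       lambda r: bool(r.get("unique_market_ref"))),
    ("Premium is positive",  lambda r: r.get("gross_premium", 0) > 0),
    ("Currency is ISO 4217", lambda r: r.get("currency") in {"GBP", "USD", "EUR", "CAD", "SGD"}),
    ("Insured country set",  lambda r: bool(r.get("insured_country"))),
]

def validate(record: dict) -> list[str]:
    """Return the names of any rules the record fails."""
    return [name for name, check in RULES if not check(record)]

# Example: a record keyed from an overseas branch with a missing country code.
failures = validate({"unique_market_ref": "B0001ABC123",
                     "gross_premium": 25000.0,
                     "currency": "USD"})
print(failures)  # ['Insured country set']
```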
Data – the good, the bad and the ugly
So why do most specialty/large commercial insurers worry about the state of their data? Because their systems must handle highly complex datasets from multiple sources in a wide array of formats: the good, the bad and the ugly.
New international data standards in the US P&C and Lloyd’s markets are helping to standardise the formats coverholders use, but carrier-side controls and checks will still be required to minimise this kind of third-party data risk.
Quick off the draw – a single solution
Our approach with our insurtech start-up DQPro has been to address all of these issues with one easy-to-use solution. We’ve developed new technology that works over old market systems to apply the checks and controls specialty carriers need on their data – and we do this daily, for business-side users, at scale.
Our largest UK customer is a global carrier that uses DQPro for multiple use cases across multiple business teams: to actively manage the data entry performed by an outsourced provider in India, to check the quality and standard of Lloyd’s market messages, to monitor FX rate consistency across multiple source systems, and much more.
Meanwhile, our largest US carrier uses DQPro to ensure that standards for data captured and sent from international branches in countries such as Brazil, Singapore and Canada are met. This way, all their checks are deployed and managed in one place by software that actively engages business users, increasing visibility, accountability and, ultimately, overall quality.
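To give a flavour of what a check like the FX rate consistency example above involves, here is a minimal sketch that compares the rate each source system used against a reference rate. The system names, rates, dates and tolerance are invented for illustration and are not DQPro’s actual implementation.

```python
# Illustrative FX consistency check: compare the rate each source system used
# for a given currency and reporting date against a reference rate.
# System names, rates and the tolerance are invented for illustration.

REFERENCE_RATES = {("USD", "2024-03-31"): 0.792}   # e.g. a group-approved rate to GBP

system_rates = {
    "policy_admin": {("USD", "2024-03-31"): 0.792},
    "claims":       {("USD", "2024-03-31"): 0.801},  # drifted from the reference
}

TOLERANCE = 0.005  # maximum acceptable absolute deviation

def fx_exceptions(systems: dict, reference: dict, tolerance: float) -> list[str]:
    """List every system/currency/date combination outside tolerance."""
    exceptions = []
    for system, rates in systems.items():
        for key, rate in rates.items():
            ref = reference.get(key)
            if ref is not None and abs(rate - ref) > tolerance:
                exceptions.append(f"{system}: {key[0]} {key[1]} rate {rate} vs reference {ref}")
    return exceptions

print(fx_exceptions(system_rates, REFERENCE_RATES, TOLERANCE))
# ['claims: USD 2024-03-31 rate 0.801 vs reference 0.792']
```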
Data confidence and visibility
This is not the Wild West, and ad-hoc approaches to managing legacy system data and compliance don’t belong in a modern market. The answer? Specialty carriers need basic, underlying confidence in and visibility of their core data in order to thrive. It’s fundamental to everything. We can’t replace every legacy system overnight, but new technology offers the tools to solve the problem efficiently and on a global scale, whilst reducing back-office cost and operational risk.