(Vast field of sweet peas in North America. Photo credit: Darren Kirby.)
In 1906, Italian economist Vilfredo Pareto observed that 20% of the pea pods in his garden contained 80% of the peas, an observation that lives on today as the Pareto Principle, or the 80/20 rule. In business, the 80/20 rule is often invoked to explain why 80% of sales come from 20% of customers. For insurers contemplating decommissioning their legacy systems and upgrading to a more flexible, configurable system, one that will let them respond quickly to customer needs and gain greater insight into their business, it's worth noting that a similar 80/20 rule applies.
What do I mean? We've all heard the nightmare tales about data migrations gone wrong. A weekend cut-over turns into weeks of system downtime. Hard-coded migration tools that worked fine when tested on smaller data sets couldn't handle the volume of data mapping required in a real-life migration. And no one anticipated mapping the 20% of non-standard, more complex data, so the project ran over budget.
With data migration, the general rule of thumb is that 80% of your data will map across automatically to the new system with little need for human intervention. This routine data is typically composed of databases of policy, claims, premiums, and possibly billing, ledger items, cash and technical transactions. But what happens to the 20% of non-standard data that, for some reason, doesn't comply with the validation rules you've established for your new system? Maybe the codes weren't understood, or some validation says this class of business can only go with this geography, or, long ago in someone's forgotten memory, a field was repurposed in a way it was never intended.
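To make that 80/20 split concrete, here is a minimal Python sketch of how a migration pipeline might partition legacy records into rows that pass the new system's validation rules and exception rows that need an analyst's attention. Every field name and rule below (the class/geography pairing, the policy ID check) is a hypothetical illustration, not an actual insurer's ruleset.

```python
# Hypothetical validation rules: which geographies each class of
# business is allowed to combine with in the target system.
VALID_GEOGRAPHIES_BY_CLASS = {
    "marine": {"coastal", "international"},
    "property": {"domestic", "coastal"},
}

def validate(record):
    """Return a list of validation failures for one legacy record."""
    errors = []
    cls = record.get("class")
    if cls not in VALID_GEOGRAPHIES_BY_CLASS:
        errors.append("unknown class-of-business code")
    elif record.get("geography") not in VALID_GEOGRAPHIES_BY_CLASS[cls]:
        errors.append("class/geography combination not allowed")
    if not record.get("policy_id"):
        errors.append("missing policy id")
    return errors

def partition(records):
    """Split records into clean rows and (row, reasons) exceptions."""
    clean, exceptions = [], []
    for rec in records:
        errors = validate(rec)
        if errors:
            exceptions.append((rec, errors))
        else:
            clean.append(rec)
    return clean, exceptions
```

The clean rows flow straight into the new system; the exceptions, with their reasons attached, land in the work queue where the remaining 20% gets resolved.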
Unfortunately, there is no way of avoiding that remaining 20% of difficult data when migrating to the new system. But there are different approaches. Most migration methodologies call for a traditional waterfall model: specify the fix, pass it to a coder, run it through testing, see what comes out the other side, and repeat, again and again. This is where budgets and timelines can run away with themselves.
Another approach is to load the legacy data on one side, the new data on the other, and insert a user-friendly migration utility in the middle. This allows you to visually match data on the source side with data on the target side and do the complex matching. This very visual view of the relationships makes it easier to spot things that aren’t right, easier to correct them when you do spot them, and easier to go back and tweak or modify those relationships as you start to understand where the mismatches are coming from.
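One way to picture the core of such a utility is a declarative mapping table that the analyst edits directly, instead of code that a programmer must rebuild for every change. The Python sketch below assumes invented legacy field names (`POL_NO`, `PREM_AMT`, `LOB_CODE`) and target names purely for illustration.

```python
# Hypothetical source->target field map. Each target field names its
# legacy source column and a transform to apply. Tweaking a mapping
# means editing one line here, not rewriting a migration program.
FIELD_MAP = {
    "policy_number": ("POL_NO", str.strip),
    "premium_amount": ("PREM_AMT", float),
    "line_of_business": (
        "LOB_CODE",
        lambda code: {"01": "marine", "02": "property"}.get(code, "UNKNOWN"),
    ),
}

def migrate_record(legacy):
    """Apply the mapping table to one legacy record."""
    return {
        target: transform(legacy[source])
        for target, (source, transform) in FIELD_MAP.items()
    }
```

When the analyst spots a mismatch, say, a legacy code landing as "UNKNOWN", the correction is a visible, one-line change to the table, which is exactly the tweak-and-rerun loop described above.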
I like to say that this approach opens up the "dark art" of what was previously buried in programs written in technical programming languages. It puts the ability to fix that remaining 20% in the hands of the technical data analyst, someone who understands both the business and the business data, without the exercise of hard-coding one-off solutions. And once in place, a good data migration utility can be reused over and over: when you take on a new book of business, after an acquisition, during periodic upgrades…any time!
The bottom line: When it comes to migrating from a legacy system to a modern system, there is no getting around the 80/20 rule. Roughly 20% of your data “pea pods” will contain peas that cause all sorts of migration problems. A good migration utility, however, can quickly make those peas sweet and help the whole data transition go down smoothly and efficiently.