10 Ways to Reduce the Risk of Data Migration During Legacy Replacement

It will never be easy but does data migration always have to be number one on the system replacement risk log?


It will happen sooner or later.  With the advantages to be gained from the latest technology solutions, putting off a legacy system replacement is not really an option.  Leading market analyst Novarica reported in its latest P&C Market Navigator report that “more than one-third of property/casualty insurers are replacing or planning to replace policy administration systems in 2016.”

With few exceptions, these system replacement programs will require a data migration work stream.  A Bloor Research whitepaper estimates that “38 percent of DM projects end in failure”—a disheartening statistic for any CIO facing a data migration project.

Why do around a third of these projects fail? For many reasons, but the main culprits are often the poor quality of legacy data, complicated record structures and formats, and source systems built on outdated technology; all in all, plenty to lose sleep over. Given these uncertainties and costly pitfalls, most vendors are reluctant to provide timescale estimates, let alone cost certainty.

My experience from commercial insurance software implementations has led me to a top-ten list of must-dos to knock data migration off the number one spot on the system replacement risk log.

Keys to success:

  1. Engage the business from the start. The data is owned by the business, and when the migration team is long gone, the business will be working with the data. They have a vested interest in getting it right, so bring them in from the start when shaping the project. The business will be the most effective testers and validators; they understand the business logic and how to get the best from the rules and validations that will be key to making sense of the data. Appointing a senior business leader as sponsor, or even as a hands-on lead for the work stream, is a great way to prove how seriously you're taking it. Ignore finance at your peril: they will be central to defining the acceptance criteria, and although not the primary users, finance ultimately holds the keys to the go/no-go decision.
  2. Invest time up front in creating the most efficient migration strategy. Planning is crucial, so make sure there is enough time to prepare, plan and analyze before starting in earnest. Your migration strategy is key to your success, so be clear about your objectives. Assess what data you really need to bring into the new system, and don't migrate everything if you don't need everything. Alternatives such as migrating on renewal, time-bound date parameters, and live and active records only are all possibilities; the key is to limit the scope to what the business really requires, not to recreate everything they already have just in case they need it later. If that is a real fear for the business, don't close the door completely: provide a mechanism for bringing legacy data into the new system on demand, perhaps from a data warehouse, in the event that it becomes essential.
  3. Make sure your vendor has a robust data migration process and a proven methodology underpinning that process. With so many variables, it's important to follow a set structure. There should be a strict governance practice supporting the methodology, ensuring everyone knows exactly what has been agreed, when and by whom. Additionally, the target solution may be undergoing enhancement, so you need to take account of the end-state software. Ensure that the project team follows the methodology and has at least some members who have done this before; it's not an exercise for rookies.
  4. Use the latest technology to support the migration process. This will give greater insight into, and control and transparency over, the data as it's migrated. Ensure the vendor has tools to provide this intelligence, allowing you to monitor progress, assess the accuracy of mappings, transformation algorithms, rules and business validations, and amend and refine those routines if necessary. Too often in the past, the migration routine has been a black box with little or no feedback until the procedures were complete. Some level of automated testing will be invaluable here, as the data volumes will be large and you will be running the routines multiple times as part of the testing cycle.
  5. Address legacy data quality early. Detailed data profiling and analysis is essential to validate the quality of the data to be migrated; fix it at source if it's defective or needs augmenting. Don't bring substandard data into the new system: it will dilute the value you can get from the data and weaken the quality of analytics that the new system will ultimately provide. As with many things, there will be an 80/20 rule: 80 percent of your data will be routine and straightforward to process, but 20 percent will be complex and require more analysis and attention. Ensure that the common structures and records are identified early to allow time for the complex ones.
  6. Rehearse, rehearse, then rehearse again. Multiple dry runs with a copy of production data in the run-up to the planned cutover date will test your process and allow you to refine your hour-by-hour run plan. Analyze the dry run results with the business as if it were the real thing, so there are no surprises on the day and you are prepared for the scenarios that may require intervention.
  7. Don’t settle for average results. Be tenacious and courageous as the migration routines are mapped and configured; don’t give in to manual re-keying and fixing data post go-live. Aim for as close to a 100 percent successful migration as possible. Data augmentation post-cutover will delay the business securing tangible value from the new system and ultimately delay the cost benefits of legacy decommissioning.
  8. Plan for an interim state. There is a strong chance that the program will be implemented in phases to minimize business disruption and mitigate a ‘big bang’ risk, which is absolutely logical. As part of this, you will require an interim-state architecture covering the period when you have partially implemented the new solution, perhaps for a few initial lines of business or one regional office, while the remaining lines stay on the legacy system. Minimize this period where possible, but ensure that the interim-state operational processes have been fully worked through with the business and there are no cracks to fall through.
  9. Create a cross-party team. As well as strong leadership, you will need your vendor to work closely with your business and IT teams, your SI partner and third-party vendors in an environment that looks not to pass the blame but to share in the solution. Build a multi-party team and break down the barriers as early as you can. The going will get tough, and the best chance of success is with the work stream pulling in the same direction rather than looking to pass the buck.
  10. Create a comprehensive, detailed go-live plan and run strictly against it. It seems obvious, but an hour-by-hour run plan developed, refined and tested during the rehearsals gives the team confidence that the lessons learned are being taken on board in readiness for the actual cutover day. The key is the level of detail: state the obvious, set every step down and ensure everyone knows their role on the day.
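To make the scoping idea in point 2 concrete, here is a minimal Python sketch of a "live and active records plus a time-bound cutoff" scope rule. The field names, statuses and cutoff date are invented for illustration; a real rule would be agreed with the business.

```python
from datetime import date

def in_migration_scope(policy, cutoff=date(2011, 1, 1)):
    """Hypothetical scope rule: migrate live/active records, plus
    expired ones whose expiry falls on or after a cutoff date."""
    if policy["status"] in ("live", "active"):
        return True
    return policy["expiry"] >= cutoff

# Invented sample book of business.
book = [
    {"policy_no": "P-1", "status": "live",    "expiry": date(2016, 6, 30)},
    {"policy_no": "P-2", "status": "expired", "expiry": date(2009, 1, 15)},
    {"policy_no": "P-3", "status": "expired", "expiry": date(2013, 4, 1)},
]

to_migrate = [p for p in book if in_migration_scope(p)]
# P-2 falls outside scope and stays in the legacy archive or
# data warehouse, available on demand if it later proves essential.
```

The point is not the few lines of code but that the rule is explicit, testable and easy to renegotiate with the business before the first dry run.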
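The automated testing suggested in point 4 can start as simply as reconciling record counts and key financial totals between the legacy extract and the migrated data after each run. A sketch of the idea, with invented field names (`policy_no`, `premium`):

```python
def reconcile(source_rows, target_rows, key="policy_no", amount="premium"):
    """Compare a legacy extract against the migrated data set and
    report missing keys, unexpected keys and any mismatch in totals."""
    src = {r[key] for r in source_rows}
    tgt = {r[key] for r in target_rows}
    src_total = sum(r[amount] for r in source_rows)
    tgt_total = sum(r[amount] for r in target_rows)
    return {
        "source_count": len(src),
        "target_count": len(tgt),
        "missing_in_target": sorted(src - tgt),
        "unexpected_in_target": sorted(tgt - src),
        "totals_match": abs(src_total - tgt_total) < 0.005,
    }

# Invented example: one policy failed to migrate.
legacy = [{"policy_no": "P-1", "premium": 100.0},
          {"policy_no": "P-2", "premium": 250.5}]
migrated = [{"policy_no": "P-1", "premium": 100.0}]

report = reconcile(legacy, migrated)
# report flags P-2 as missing and the premium totals as not matching
```

Run after every dry run, checks like this turn the "black box" into a scoreboard the whole team can see.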
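The profiling step in point 5 is equally easy to start small. This sketch computes blank rates and distinct counts per column so the defective 20 percent surfaces early; the sample records and the 10 percent review threshold are assumptions, not from any particular system.

```python
def profile_column(values):
    """Return basic quality metrics for one column of legacy data."""
    total = len(values)
    blanks = sum(1 for v in values if v is None or str(v).strip() == "")
    distinct = len({str(v).strip() for v in values})
    return {
        "total": total,
        "blank_rate": blanks / total if total else 0.0,
        "distinct": distinct,
    }

# Invented legacy policy records showing typical quality problems.
policies = [
    {"policy_no": "P-1001", "inception": "2014-03-01", "broker": "Acme"},
    {"policy_no": "P-1002", "inception": "",           "broker": "Acme"},
    {"policy_no": "P-1003", "inception": "2015-07-12", "broker": None},
]

for field in ("policy_no", "inception", "broker"):
    stats = profile_column([p[field] for p in policies])
    flag = "REVIEW" if stats["blank_rate"] > 0.1 else "ok"
    print(field, stats, flag)
```

In practice you would run this (or a dedicated profiling tool) over every source table, then take the flagged columns back to the business to decide whether to cleanse at source, augment or descope.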

The truth of the matter is, there is no pain free solution to data migration, but there are a number of analgesic steps you can take to ensure it’s not as uncomfortable as it could be—and if you’re lucky, move it down the list on the risk log.

 

John Racher // John Racher is the Product Strategy Director at Xuber, a provider of insurance software products and services to more than 180 brokers and carriers in over 46 countries worldwide. Xuber is part of Xchanging, a CSC Company.
