When investing in predictive analytics initiatives, an insurance company's success is typically defined by two key factors: the precision of its strategy and the amount of actionable data it can work with. Insurers can craft an excellent strategy but may not have the appropriate data to support it.
Access to actionable data isn’t an insurance-specific challenge, but a pervasive issue for all industries. According to a survey from Interana, 70 percent of companies struggle to find actionable insights in their data and 73 percent reported that the slow speed with which insights are produced is a major pain point. In the insurance space specifically, the scope of a predictive analytics strategy will always be tethered to the amount of data an insurer can access.
Sometimes, in-house data alone is enough to support the insurer's business strategy. If an insurer's main analytics goal is to identify and shed poor-risk accounts from its book of business while retaining its normal risk appetite, then the company's own data may be sufficient, assuming a large enough sample size. Few companies, however, are interested in shrinking their top-line revenue by double-digit percentages. Most want to improve the overall risk quality of their book by aligning price to risk and replacing their riskiest policies with less risky ones. Additionally, any organization whose strategy is to branch out into unfamiliar classes and territories must incorporate third-party data that expands its view of the market beyond its own book.
Different types of third-party data
Third-party data comes in many forms, including public sources like the Census Bureau, the Bureau of Labor Statistics, SAFER and NOAA. New sources of primary data are exploding thanks to nascent technologies like drone and satellite imagery and IoT devices, and to the rise of e-commerce businesses that are rich in information from published reviews and other public content. These inputs could become increasingly important for insurers as in-house datasets shrink: to remove friction and streamline the customer experience, insurers are asking potential policyholders fewer questions during the quoting process. Online applications account for only a small share of commercial lines business today, but they are poised to grow.
One way to bolster the predictive strength of a model is to leverage consortiums, or pools of third-party data. Consortiums can contain transactional data on millions of policies, including billing, claims and audit/inspection information, that insurers can use to fill knowledge gaps in areas where they wish to grow. How much consortium data is required depends entirely on the business objective. For instance, a large carrier is less likely to believe it needs data beyond its own, yet many large carriers are missing data in key segments. Conversely, it tends to be small to mid-sized carriers that are most limited across the board by a lack of transactional data.
How data can fuel a predictive analytics strategy
There are numerous instances where a predictive analytics strategy requires external data to deliver optimal results. This need is rooted in the nature of risk selection and isn't limited to any single line of property/casualty insurance. As the market becomes increasingly segmented and insurers carve out their own niches to drive profitability, it is more crucial than ever for insurers to know what they aren't writing as well as what they are.
Commercial Auto: If an insurer with expertise in writing tractor trailers in Montana builds a model with the goal of diversifying the vehicle types it writes within the same state, that goal would be difficult to achieve with only its own data set and non-transactional third-party data. By adding consortium data consisting of policy and claims information for other vehicle classes within the state, the predictive model could better identify the ideal risks in each vehicle type. Despite the high claim severity in the commercial auto market, which often keeps insurers from expanding aggressively, there are still pockets of good risk to be found if an insurer has the data to look at a granular level. This practice is what historically made Progressive the market leader in personal auto in the 1990s: Progressive wrote policies in a subprime market no other insurer would touch.
Workers’ Compensation: Strategies inhibited by a lack of third-party data in workers’ comp often include writing the same classes in new states, new classes in existing states, or new classes in new states. The problem with relying solely on an insurer’s in-house data is that the data is biased by years of homing in on risks that fit the insurer’s appetite; it won’t generalize to new areas. Some insurers make the mistake of believing that their own data, built from years of expertise writing construction in Wyoming, will provide the necessary predictive insights for the same class code in Tennessee. In fact, it will often do more harm than good. Third-party data helps protect insurers from their own bias when they have carved out an expertise in one or more classes of business. An insurer with immense expertise in a few areas risks overfitting its model with a disproportionate amount of good risk. In other words, the model has been refined so heavily on one specific type of business that it becomes too selective, rating good risks as worse than they are.
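The selection-bias effect described above can be sketched with a toy simulation. Everything here is invented for illustration (the hazard score, the loss curve, and the simple linear pricing model are assumptions, not any insurer's actual method): a model fit only on appetite-filtered in-house data badly misprices an unfamiliar, higher-hazard class, while the same model fit on pooled, consortium-style data covering the whole market does far better.

```python
import random

random.seed(42)

def true_loss(hazard):
    # Hypothetical ground truth: expected loss grows nonlinearly with hazard.
    return 100 + 5 * hazard ** 2

# Full market: hazard scores span 0-10. The insurer's historical appetite
# kept it to hazard <= 3, so its in-house data covers only that slice.
market = [(h, true_loss(h) + random.gauss(0, 5))
          for h in (random.uniform(0, 10) for _ in range(5000))]
in_house = [(h, loss) for h, loss in market if h <= 3]

def fit_line(data):
    # Ordinary least-squares fit of loss on hazard; returns (intercept, slope).
    n = len(data)
    mx = sum(h for h, _ in data) / n
    my = sum(loss for _, loss in data) / n
    cov = sum((h - mx) * (loss - my) for h, loss in data)
    var = sum((h - mx) ** 2 for h, _ in data)
    slope = cov / var
    return my - slope * mx, slope

def predict(model, hazard):
    intercept, slope = model
    return intercept + slope * hazard

biased_model = fit_line(in_house)  # trained only on appetite-filtered data
pooled_model = fit_line(market)    # trained on consortium-style pooled data

# Pricing error on an unfamiliar, higher-hazard class (hazard = 8):
err_biased = abs(predict(biased_model, 8) - true_loss(8))
err_pooled = abs(predict(pooled_model, 8) - true_loss(8))
```

In this sketch the in-house model extrapolates from a narrow, favorable slice of the market and substantially underestimates the loss at hazard 8, while the pooled model lands close to the truth, which is the gap consortium data is meant to close.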
The ability to understand, maintain and act on the influx of third-party data has become the battleground that will drive industry competition for years to come. Predictive analytics has helped both create and meet the demands of a highly competitive insurance market, giving insurers the ability to grow their business with unprecedented speed and accuracy. The success of one analytics initiative over another hinges on the execution of the predictive analytics strategy and access to the right foundational data.