Are Underwriters Smarter than Predictive Models?

Insurers must acknowledge the need for both sophisticated models and seasoned analysts – neither models nor underwriters are as smart individually as they are together.

In insurance, advanced analytics and predictive modeling have been among the most important business and technology development areas during the last decade. After first proving their value in claims fraud detection and consumer marketing, these capabilities naturally migrated to underwriting, where they have been increasingly responsible for assessing and scoring risks. Expectations have risen to the point that some insurers seem to believe that models alone can produce better results than traditional heuristics-based approaches to underwriting.

There can be no doubt that today’s models have come a long way in power and sophistication during their relatively brief history. But it remains to be seen whether models can truly replace skilled and experienced underwriters. The better questions to ask are:

  • How will advanced models and rules engines change the role of underwriters in the future?
  • In the hands of skilled professionals, how will they enable better decision making and business results?

Historically, underwriters have relied on experience, market knowledge, intuition and oral history more than statistical insights when evaluating risk. But as new, more powerful tools and technology reached the industry, personal and small commercial carriers began applying analytics to automate basic underwriting tasks and decisions – most notably through risk profiling and scoring models. By the mid-1990s, for example, personal lines underwriters used chains of rules to perform simple “knockout” scoring, mainly on auto policies. Driven by binary “if/then” logic, these first-generation rules and systems might decline to quote a risk based on a single variable. They also automated some basic data gathering, but were otherwise limited by their reliance on legacy technology and basic data insights.
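
To make the approach concrete, here is a minimal sketch of that first-generation knockout logic – the rule names, fields and thresholds are hypothetical, but the pattern is the same: each rule inspects a single variable, and any one failure is enough to decline the quote.

```python
# A hypothetical first-generation "knockout" screen for an auto risk.
# Each rule inspects a single variable; any one failure declines the quote.

AUTO_KNOCKOUT_RULES = [
    ("dui_conviction",       lambda r: r["dui_convictions"] > 0),
    ("too_many_accidents",   lambda r: r["at_fault_accidents_3yr"] >= 3),
    ("unacceptable_vehicle", lambda r: r["vehicle_type"] in {"exotic", "salvage"}),
    ("underage_driver",      lambda r: r["driver_age"] < 16),
]

def knockout_screen(risk: dict) -> tuple[bool, list[str]]:
    """Return (declined, reasons) using simple binary if/then logic."""
    reasons = [name for name, fails in AUTO_KNOCKOUT_RULES if fails(risk)]
    return (len(reasons) > 0, reasons)

declined, reasons = knockout_screen({
    "dui_convictions": 0,
    "at_fault_accidents_3yr": 3,
    "vehicle_type": "sedan",
    "driver_age": 34,
})
print(declined, reasons)  # True ['too_many_accidents']
```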

By the late ’90s and early 2000s, the use of modeling and automated scoring had spread to homeowners’ policies and, to a more limited extent, small commercial lines. The competitive success of a few early adopters made statistics-based analysis more visible to the market, and adoption expanded rapidly. Fast followers refined their approaches, with some companies integrating actuarial, underwriting and modeling perspectives to develop proprietary rule methodologies and models. Others turned to external “black box” models that were enhanced with third-party data and marketed by industry vendors.

As the technology advanced and enterprise systems were modernized, more robust analysis and sophisticated risk profiling techniques, such as pattern matching, came within reach. Underwriters began asking more questions of the data (the first of these is sketched in code after the list):

  • What is the risk level compared to similar accounts?
  • How do we better align price with risk quality in a more granular way?
  • How do we leverage pricing elasticity?
  • What can we learn from better alignment of claims data to premium data?
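
As an illustration, the first question above can be approximated with a simple nearest-neighbor comparison against peer accounts. This is a minimal sketch; the book of business, features and weights are hypothetical.

```python
# Illustrative sketch: benchmark a submission against similar accounts
# in the book using a simple nearest-neighbor comparison.

import math

BOOK = [  # (annual_revenue_musd, claims_5yr, loss_ratio) for peer accounts
    (12.0, 1, 0.55), (15.0, 3, 0.82), (11.5, 0, 0.40), (14.0, 2, 0.70),
]

def peer_loss_ratio(revenue: float, claims: int, k: int = 3) -> float:
    """Average loss ratio of the k most similar accounts in the book."""
    ranked = sorted(BOOK, key=lambda a: math.hypot(a[0] - revenue, a[1] - claims))
    return sum(a[2] for a in ranked[:k]) / k

print(round(peer_loss_ratio(13.0, 2), 2))  # 0.69 for this toy book
```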

Stronger rules engines enabled these “deeper dives” into micro-segmentation, evaluating combinations of risk variables within more sophisticated rules architectures. Complex rule sets were developed using a combination of statistical and heuristic analyses, with feedback loops enabling continual learning and rule refinement. These rules were highly nuanced, reflecting the sophisticated interrogation of risk dimensions that the best underwriters could perform.
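
A minimal sketch of this second-generation pattern, with illustrative variables, weights and thresholds: rules now score combinations of variables rather than single-variable knockouts, and a feedback loop adjusts the referral threshold as loss experience accumulates.

```python
# Hypothetical second-generation rule set: variables interact, and
# observed loss experience feeds back into the rule's threshold.

def micro_segment_score(risk: dict) -> float:
    """Score a risk on interacting variables (illustrative weights)."""
    score = 0.0
    # Interaction: a young driver AND a high-performance vehicle is far
    # riskier than either variable alone.
    if risk["driver_age"] < 25 and risk["vehicle_class"] == "performance":
        score += 40
    elif risk["driver_age"] < 25:
        score += 15
    # Combination of territory experience and prior claims.
    if risk["territory_loss_index"] > 1.2 and risk["claims_5yr"] >= 2:
        score += 30
    return score

REFER_THRESHOLD = 50.0

def refine_threshold(observed_loss_ratio: float, target: float = 0.65) -> None:
    """Feedback loop: tighten the referral threshold when the segment
    runs hotter than the target loss ratio, loosen it when cooler."""
    global REFER_THRESHOLD
    REFER_THRESHOLD *= target / observed_loss_ratio

print(micro_segment_score({"driver_age": 22, "vehicle_class": "performance",
                           "territory_loss_index": 1.3, "claims_5yr": 2}))  # 70.0
```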

By 2008, predictive modeling had expanded into middle-market commercial and specialty lines, such as directors and officers policies for nonprofits. The transactional models were no longer simply database routines called by the underwriting or policy systems during the quote process. Rather, the models were configured into enterprise rules engines and integrated more seamlessly into the application landscape. These newer solution approaches empowered business analysts, rules architects and modelers to manage rule sets more easily through business-friendly rules management consoles. The result was much faster speed to market for model and rule changes.
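
The architectural shift can be sketched as rules externalized into declarative data rather than hard-coded logic, so that an analyst can edit them in a management console and the engine picks up the change without a software release. The rule schema, fields and thresholds below are hypothetical.

```python
# Sketch: rules stored as declarative data that the engine re-reads at
# evaluation time, so console edits take effect without a code deploy.

import json
import operator

OPS = {">": operator.gt, ">=": operator.ge,
       "<": operator.lt, "<=": operator.le}

RULES_JSON = """
[
  {"field": "predictive_score", "op": ">=", "value": 780, "action": "auto_quote"},
  {"field": "predictive_score", "op": "<",  "value": 400, "action": "decline"},
  {"field": "building_age",     "op": ">",  "value": 75,  "action": "refer"}
]
"""

def evaluate(risk: dict, rules_source: str = RULES_JSON) -> list[str]:
    """Load the current rule set and return the actions that fire."""
    rules = json.loads(rules_source)
    return [r["action"] for r in rules
            if OPS[r["op"]](risk[r["field"]], r["value"])]

print(evaluate({"predictive_score": 810, "building_age": 90}))
# ['auto_quote', 'refer']
```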

[EY infographic: the history of underwriting]

Today, as adoption has reached critical mass, confidence in models has grown to the point that other underwriting rules are de-emphasized or ignored completely. Insights and perspectives from experienced underwriters are not necessarily incorporated into workflows or decisioning processes. For these reasons, it seems possible – some observers might say inevitable – that predictive modeling and statistical analysis will replace the heuristics-based analysis that has long been standard practice in underwriting.

While the technology innovations and modeling improvements have been impressive, there is real risk in underwriters moving toward 100 percent reliance on models. For example, models may not be able to identify various risk anomalies, especially in commercial lines. Models operate predictably with static information (e.g., credible, critical-mass data inputs drawn primarily from policy and claims systems) and within defined analytical processes. They cannot detect nuances of risk quality or dynamically incorporate new data as an extra dimension of evaluation in real time. In short, models are still unproven when it comes to assessing the finer dimensions of risk for complex commercial lines.

The Limits of Models

Today, much of the rich data for commercial lines underwriting remains in documents and spreadsheets that have yet to be fed into policy administration systems or harvested for analysis and for enhancing rules and models. For instance, a risk location assessment in property underwriting may require 300 data elements for evaluation and pricing; models cannot yet tackle data sets of that breadth or the many possible combinations of those elements. In these situations, underwriting teams must still apply heuristics, their own fact base and their contextual knowledge of additional parameters.

In an industry that still embraces continual learning, there is a growing realization that models cannot and should not be the final or absolute arbiters of underwriting decisions. Rather, they are best viewed as one component in the broader risk selection and pricing process.

Further, it’s become clear that models must coexist with a broader ecosystem of underwriting rules, such as those relating to product, account handling, information ordering, workflow and documentation. Leading-edge firms and innovators are already focused on ongoing rules architecture development and model enhancements in these areas. The continual addition of third-party data (especially customer and location data) in the underwriting process will similarly increase the depth and applicability of industry and demographic insights. And, as claims systems improve the alignment of claim coding with underwriting and premium data, additional insights will further enhance the accuracy of both predictive models and underwriting rules.

Likewise, new information requirements are continually introduced and must be incorporated and assessed in real time. Ultimately, dynamic changes to a risk’s data footprint require similar fluidity in rule and model adjustments – and in the underwriting process itself – as part of a continuous improvement cycle. The difference is that models must go through time-consuming statistical validation whenever new inputs are incorporated, while underwriters can immediately refine their evaluations and ask additional questions as new inputs arrive. Put another way, models require extraordinarily high confidence in data and its history, while underwriters are better able to negotiate uncertainty in assessing risks.

Defining Roles

This is not a simple “either/or” proposition. Rather, the ultimate – and likely best – outcome for insurers could be that skilled underwriters are empowered to make better decisions by supplementing model output with their own heuristic insights and knowledge of comparable risks across changing market forces. With better models and rules to take care of the simpler decisions, underwriters can focus time and attention on higher-value analysis and areas where deeper, more iterative analysis pays dividends. The combination of rules, models and human knowledge will certainly yield better results than software alone. The challenge will be in defining the precise roles for models, rules engines and individual underwriters in the end-to-end process of assessing, scoring, pricing and managing risk.
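
One way to picture that division of labor is a simple triage step: the model and rules dispose of clear-cut cases straight through, while borderline scores or flagged anomalies are routed to an underwriter. This is a minimal sketch with illustrative thresholds and field names.

```python
# Sketch of the division of labor described above: straight-through
# processing for clear cases, human referral for everything else.

def triage(risk: dict, model_score: float) -> str:
    """Route a submission to automated handling or an underwriter."""
    # Clear accepts and declines are handled without human touch.
    if model_score >= 0.85 and not risk.get("anomaly_flags"):
        return "auto_accept"
    if model_score <= 0.20:
        return "auto_decline"
    # Borderline scores, anomalies or thin data go to an underwriter,
    # who supplements the score with heuristics and market knowledge.
    return "refer_to_underwriter"

print(triage({"anomaly_flags": ["new_exposure_class"]}, 0.90))
# refer_to_underwriter: a strong score alone does not bypass the human
```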

While predictive modeling capabilities are essential to marketplace success, they simply can’t be viewed as smarter than or replacements for skilled and experienced underwriters. Few, if any, underwriting decisions are truly binary. That’s why insurers still need teams of people who know how to balance the nuances of risk quality, emerging exposures, market contexts and competitive strategies as they make critical underwriting decisions. The bottom line is that smart insurers recognize the need for both sophisticated models and seasoned analysts. To put it another way – neither models nor underwriters are as smart individually as they are together.

Gail McGiffin // Gail McGiffin is partner/principal in the Ernst & Young Insurance practice and leads the Underwriting, Product, Policy and Billing Solutions. Most recently, McGiffin was CIO at ProSight Specialty Insurance where she led the creation of a fully outsourced platform. Previously, she was a senior executive/managing partner in Accenture’s Insurance practice, leading its Global Underwriting Transformation Offerings. In this role, Gail was responsible for the design, development and implementation of software solutions for Accenture’s insurance clients. She architected Accenture’s consulting assets for underwriting and product development such as diagnostics, business architectures, operating models, process models, performance scorecards and change management. McGiffin started her career in the P&C industry with 15 years at Chubb and two years at Royal & SunAlliance. Her areas of consulting and industry expertise include commercial lines and specialty underwriting, product innovation, customer segmentation, policy administration, operational transformation, distribution optimization, predictive modeling, business intelligence, data mining, GIS and enterprise location intelligence. McGiffin can be reached at gail.mcgiffin@ey.com.

Comments (2)

  1. Excellent article and forward thinking. The onset of cognitive analytics and methods greatly enhances the power of underwriting, with referral to humans when and where appropriate. The diagram is fantastic and really paints the path of the future, which is quickly becoming reality.

  2. The ability to have a dynamic model that can update itself using real-time data feeds (location, weather and ‘event’ information) is a must-have in the industry. Analysing premiums in correlation with claim trends can automatically suggest optimum premiums. There is still a need for smart underwriters who can use all the information being made available to them!
