Robert Whitton, VP of Business Development at DRC, was recently selected by Chisel AI to be one of fourteen expert contributors to their 2021 Underwriting Priorities eBook. Chisel AI works with commercial insurance companies to help them harness the ever-increasing power of AI and publishes this annual eBook to shine a light on key priorities and recommendations for implementing meaningful digital transformation in the insurance industry. An unabridged version of his article appears below:
Build for immediate needs, but don’t ignore what’s around the corner
The COVID-19 pandemic may have accelerated digitalization across the industry, but the challenges of translating the complexities inherent in commercial underwriting into practical technology remain. Underwriters still struggle to find intelligently prepared data and insights that enable better, more informed decisions.
These challenges, in turn, lead to disappointing or failed implementations of solutions that don’t adequately address the unique needs of the insurance industry.
Carriers need to adopt technology solutions that address underwriters’ most pressing current concerns while also preparing them for a future in which AI and machine learning play an increasingly large role.
All data must be accurate, meaningful and accessible to AI and machine learning
When assessing technology to benefit underwriting, special attention should be paid to areas that are particularly time- and labor-intensive.
Any solution under consideration should, for example, allow underwriters to upload and validate very large schedules of risk, immediately return indications of eligibility and pricing to prospective clients, and automate third-party data enrichment.
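As an illustration of the first two capabilities, here is a minimal sketch of schedule validation with an immediate eligibility indication. The field names (`address`, `tiv`) and the single rule (a hypothetical $10M per-location total-insured-value cap) are assumptions for this example, not any particular product's logic.

```python
TIV_CAP = 10_000_000  # assumed per-location TIV cap for this sketch

def validate_schedule(rows):
    """Return (row_index, message) pairs for malformed schedule rows."""
    errors = []
    for i, row in enumerate(rows):
        if not row.get("address"):
            errors.append((i, "missing address"))
        try:
            float(str(row.get("tiv", "")))  # total insured value must be numeric
        except ValueError:
            errors.append((i, "TIV is not numeric"))
    return errors

def eligibility_indication(rows):
    """Give an immediate indication once the schedule validates cleanly."""
    if validate_schedule(rows):
        return "incomplete"
    if any(float(str(r["tiv"])) > TIV_CAP for r in rows):
        return "refer"
    return "eligible"
```

In a real system the rule set would be far richer, but the shape is the point: validation errors surface to the underwriter immediately, and a clean schedule gets an indication without a manual review cycle.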
From a risk-management perspective, the benefit of easily accessible, detailed real-time analytics likewise cannot be overstated. Not only do these capabilities significantly boost the efficiency of an underwriting team, they also encourage the capture and use of as much valuable data as possible.
Populate data lakes with meaningful risk data
The latter point is key, because insurance solutions should not only allow for the flexible underwriting and pricing of complex risks, but also store all those risk characteristics in a structured form that can be used to effectively train algorithms.
The selection, adoption, and implementation of enterprise data warehouses lays the foundational infrastructure for true corporate data repositories. The next critical step for insurance carriers is to populate those data lakes with meaningful risk data. Today it is very common to see only a fraction of the risk data elements stored in a carrier’s system of record, while data-rich quoting tools quietly go stale in shared folders, emails, Excel spreadsheets, and siloed data stores. This unused data undermines the decision-making capacity of the entire organization.
To remain competitive, enterprise leaders must champion the preservation of all of the historical data elements used to price and underwrite a risk. This empowers actuaries to run meaningful analytics and begin feeding AI and machine learning applications currently starved for valuable data.
The increasing adoption of algorithms to underwrite and help price/grade risks is inevitable. Carriers unable to store all of their data in an analyzable format will rapidly fall behind in the race to take advantage of new AI/ML technology.
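The contrast between a quote buried in a spreadsheet and an analyzable record can be sketched in a few lines. The record fields below (quote ID, line of business, TIV, construction class, premium, bound flag) are hypothetical examples of the risk characteristics a carrier might preserve, assuming a JSON Lines landing format for the data lake.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical structured risk record: every characteristic used to
# price and underwrite the risk is a typed field, not a cell buried
# in a spreadsheet or an email attachment.
@dataclass
class RiskRecord:
    quote_id: str
    line_of_business: str
    total_insured_value: float
    construction_class: str
    quoted_premium: float
    bound: bool

def to_lake_row(record: RiskRecord) -> str:
    # One JSON Lines row per quote; rows in this shape can be queried
    # by actuaries or fed directly into model training pipelines.
    return json.dumps(asdict(record), sort_keys=True)
```

Whatever the actual schema, the design choice is the same: every element that influenced the price lands in the lake in a structured, typed form, including the quotes that never bound.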
Break free of policy administration system hegemony
Finally, the myth that a company must be dependent on a monolithic policy administration system needs to be dispelled once and for all.
Carriers would be wise to decouple their downstream integrations from any single PAS and tie them all into the enterprise corporate data lake. This process might not be quick or easy, but it is of great importance. The costs, both in terms of time and capital, associated with implementing and maintaining “one PAS to rule them all” can easily become a long-term impediment to building a successful enterprise IT architecture.
Multiple PASs can easily coexist, provided they all feed the corporate data lake. In the end, this transformation is crucial to building the data architecture that will support the coming levels of automation in underwriting, pricing, and operations.
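The coexistence pattern can be sketched as a thin adapter layer: each PAS keeps its own record shape, and a per-system adapter maps it into one canonical lake schema that downstream integrations depend on. Both record shapes and all field names below are invented for illustration.

```python
# Canonical data-lake schema that downstream integrations target,
# instead of coupling to any single PAS.
CANONICAL_FIELDS = {"policy_id", "insured_name", "annual_premium"}

def from_legacy_pas(rec: dict) -> dict:
    # A legacy PAS with flat, mainframe-style field names (hypothetical).
    return {
        "policy_id": rec["POL_NUM"],
        "insured_name": rec["INSURED"],
        "annual_premium": float(rec["ANN_PREM"]),
    }

def from_modern_pas(rec: dict) -> dict:
    # A newer PAS exposing nested JSON (also hypothetical).
    return {
        "policy_id": rec["policy"]["id"],
        "insured_name": rec["policy"]["insured"]["name"],
        "annual_premium": float(rec["billing"]["annualPremium"]),
    }

def to_lake(records, adapter):
    # Normalize each PAS's records before they land in the lake.
    rows = [adapter(r) for r in records]
    assert all(set(r) == CANONICAL_FIELDS for r in rows)
    return rows
```

Replacing or adding a PAS then means writing one new adapter, not rewiring every downstream integration.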