by Ulrik Franke (RISE SICS)
Automated decision-making has the potential to increase productivity and competitiveness, as well as to compensate for well-known human biases and cognitive flaws [1]. But today’s powerful machine-learning-based technical solutions also bring problems of their own – not least in being uncomfortably black-box-like. A new research project at RISE Research Institutes of Sweden, in collaboration with KTH Royal Institute of Technology, has recently been set up to study transparency in the insurance industry, a sector poised for technological disruption.
At the Danish Insurance 2018 conference in Copenhagen in February, an IBM partner in the exhibition area marketed Watson with the tagline “Insurance company employees are working with Watson to assess insurance claims 25% faster”. The fact that such efforts are underway is no surprise to the informatics or mathematics communities. Yet, it serves as an excellent illustration of both the promise and the perils that the insurance industry faces in the digital world.
Start with the promise. Insurance still relies largely on manual labor throughout its value chain. Though digitally supported, assessing risk, setting premiums, and adjusting claims most often involve human decision-making. With smarter tools, whether based on rules or machine learning, more of this work could be automated, cutting costs for companies and premiums for customers.
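To make the idea concrete, here is a minimal sketch of rules-based claim triage in Python; the rules, thresholds, and figures are entirely hypothetical and merely illustrate how simple claims could be settled automatically while the rest are routed to a human adjuster:

```python
# Minimal sketch of rules-based claim triage (hypothetical rules and figures):
# simple claims are settled straight through, the rest go to a human adjuster.
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float          # claimed amount in EUR
    policy_active: bool    # was the policy in force at the time of loss?
    prior_claims: int      # number of claims in the last 12 months

def triage(claim: Claim) -> str:
    if not claim.policy_active:
        return "reject"                      # no coverage, no payout
    if claim.amount <= 500 and claim.prior_claims == 0:
        return "auto-settle"                 # low value, low risk
    return "manual review"                   # everything else to an adjuster

print(triage(Claim(amount=300, policy_active=True, prior_claims=0)))   # auto-settle
print(triage(Claim(amount=8000, policy_active=True, prior_claims=2)))  # manual review
```

A machine-learning variant would replace the hand-written rules with a model trained on historical claims – which is precisely where the black-box concerns discussed below arise.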
However, the road to digital insurance can be rocky. In the past few years, there have been abundant reports of flawed machine-learning applications, e.g., the infamous misclassification of people as monkeys in image-recognition systems, signs of racial bias in systems predicting the probability of criminal recidivism, and sexism in natural language processing. How can an insurance company that starts employing machine learning at a larger scale be sure that risk is properly assessed, that premium levels are sustainable, and that customers are treated fairly when claims are adjusted?
Such questions prompted Länsförsäkringars forskningsfond, the research arm of a major Swedish insurer, to issue a call for research proposals related to digitalization and insurance. The resulting project, Transparent algorithms in insurance [L1], started in October 2018 and aims to explore issues of algorithmic transparency, broadly construed, in the insurance industry. During the project’s two-year duration, several research questions will be studied, some technical, others closer to social science.
One important part of the project will be to study the conditions for algorithmic transparency in insurance. Many techniques for transparency in machine-learning systems exist, but they are mostly designed to fit particular technologies, such as deep neural networks or support vector machines. Meaningful transparency, however, also needs to account for the industry in question, e.g., insurance, and for its stakeholders. In other words, meaningful algorithmic transparency probably looks different to a CEO, to an actuary, to a software engineer, and to a consumer shopping for car insurance.
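As an illustration, one widely used model-agnostic technique is the global surrogate: a small, readable model is trained to mimic the predictions of an opaque one. The Python sketch below uses scikit-learn; the data, feature names, and models are synthetic and hypothetical, chosen only to show the idea, not to represent the project’s methods:

```python
# Global surrogate: fit a shallow decision tree to the *predictions* of an
# opaque "black box" (here a random forest stands in for it), so the tree
# describes what the opaque model does in human-readable rules.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                  # hypothetical features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # hypothetical risk label

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))          # mimic the black box, not the data

# Fidelity: how often the surrogate agrees with the black box.
print("Fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=["age", "car_value", "claims"]))
```

Whether such a tree constitutes a meaningful explanation, however, depends on the audience: an actuary, a software engineer, and a consumer will read it very differently.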
Another important strand is ethics. Ethicists have argued that implementing algorithms entails ethical judgements and trade-offs, though these are not always made explicit. The project will engage with ongoing software projects at the funder and work with development teams to make explicit the ethically relevant choices being made. A promising technique is the paradigm of value-sensitive design [2], which has been successfully employed in other areas.
A third cornerstone of the project will be to explore the consequences of increased transparency in insurance. Some are expected to be decidedly positive, ranging from better requirements engineering in software projects to better customer value and even growing market shares. Indeed, insurance startups and would-be disruptors such as Lemonade in the US and Hedvig in Sweden use transparency, in various forms, as a selling point.
However, the insurance industry is particularly interesting because transparency also has a significant potential downside here. Insurance companies need to design their products so that customers’ incentives do not change for the worse, e.g., a car owner driving recklessly because the insurance coverage will take care of the consequences. Mechanisms such as deductibles are designed precisely to counter this moral hazard. Similarly, the fact that the insured typically knows more than the insurer, e.g., experiencing an ache that a physician cannot detect, might make the pool of insureds riskier than expected. Again, mechanisms such as indemnity limits are designed to counter such adverse selection. While these conceptual problems inherent to insurance have long been known, an increased dependence on algorithms, paired with greater transparency, might invite novel forms of undesirable strategic behavior among insureds, ultimately leading to higher premiums for customers. For example, it is probably self-defeating to be transparent about algorithms designed to detect insurance fraud. Thus, the project will investigate needs and techniques for selective transparency.
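A toy example illustrates the point. Suppose a fraud-detection rule and its threshold were fully disclosed (the rule and all numbers below are hypothetical); a strategic claimant who knows them can shape a claim to score just under the threshold:

```python
# Hypothetical fraud score: large claims filed soon after the policy starts
# look more suspicious. FLAG_THRESHOLD and all figures are made up.
def fraud_score(claim_amount: float, days_since_policy_start: int) -> float:
    return claim_amount / 1000 + max(0, 90 - days_since_policy_start) / 30

FLAG_THRESHOLD = 5.0  # claims scoring above this go to manual review

naive = fraud_score(claim_amount=6000, days_since_policy_start=10)
print(naive, naive > FLAG_THRESHOLD)    # 8.67 -> flagged for review

# Knowing the disclosed rule, a fraudster lowers the amount and waits out
# the 90-day window, slipping just under the threshold:
gamed = fraud_score(claim_amount=4500, days_since_policy_start=95)
print(gamed, gamed > FLAG_THRESHOLD)    # 4.5 -> passes unflagged
```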
The project is still in its infancy, but in two years the research team will have taken the first steps towards designing the modern software quality-assurance processes that will enable Länsförsäkringar and the rest of the insurance industry to make more informed decisions about how best to make use of advances in machine learning. As “insurers face unprecedented competitive pressure owing to technological change” [3], it will certainly be an interesting road ahead.
Link:
[L1] https://www.ri.se/en/what-we-do/projects/transparent-algorithms-insurance
References:
[1] A. Tversky, D. Kahneman: “Judgment under uncertainty: Heuristics and biases”, Science 185.4157, 1974, 1124–1131. DOI: 10.1126/science.185.4157.1124
[2] B. Friedman et al.: “Value sensitive design and information systems”, in: Early engagement and new technologies: Opening up the laboratory, Springer, Dordrecht, 2013, 55–95. DOI: 10.1007/978-94-007-7844-3_4
[3] “The future of insurance: Counsel of protection”, The Economist, March 11, 2017, 67–68.
Please contact:
Ulrik Franke
RISE ICT/SICS, Sweden
+46 72 549 92 64