
Risks Special Issue on “Granular Models and Machine Learning Models”

Editorial

Greg Taylor
School of Risk and Actuarial Studies, University of New South Wales, Kensington, NSW 2052, Australia; greg_taylor60@hotmail.com

Received: 8 December 2019; Accepted: 13 December 2019; Published: 30 December 2019

It is probably fair to date loss reserving by means of claim modelling from the late 1960s. For much of the 50 years since then, the models have either remained algebraically simple, or have proceeded incrementally to greater algebraic complexity. Much of the limitation on model sophistication has derived from computing limitations; as computing power has increased, so have model complexity and sophistication. At some point on this journey, the detailed modelling of the claim process of individual claims (granular modelling (GM)) became feasible. More recently, machine learning (ML) has increasingly found its way into the literature. Again, this may (though need not) target individual claims.

The two approaches stand in stark contrast with each other in at least one respect. In my own contribution to the present volume, I refer to them as the Watchmaker (GM) and the Oracle (ML), the one being concerned with ever more detailed and minute modelling, and the other with incisive generalizations about data on the basis of reasoning that may be obscure or even impenetrable.

The appearance of two (relatively) new approaches to the estimation of individual claim loss reserves immediately creates a tension between them, with natural questions about their relative performances. But the issue is greater than this. There are also questions about the performance of new versus old models. I have no doubt that some of these new approaches will prove useful in future, and quite possibly dominate all others. For the present, however, their status is, in my view, unproven.
The research record contains a number of papers in this field, but some of them consist of an application to a single dataset, with little in the way of general conclusions or indication of the extent to which the results could be extrapolated to other datasets. The consequence is a fragmented research record, leaving open questions about the general applicability of GM and ML. Some of the (to my mind) landmark questions requiring answers are the following.

1. Modelling of individual claims. This is possible with both GM and ML. However, it is a statistical truism that enlargement of the volume of data used does not necessarily increase predictive power. Indeed, in Section 8.2 of my own contribution to this volume, I give an example where it will not. So, can we identify the circumstances in which the use of individual claims is likely to bring predictive benefit?

2. Complexity. One might reasonably guess that the answer to the previous question will be somehow related to the complexity of the dataset under analysis. In short, datasets with simple algebraic structures call for simple methods of analysis, whereas complex datasets call for more complex methods, and possibly individual claims. So, can we design a metric of data complexity (perhaps based on relative entropy or similar) that could be used to triage datasets?

3. Predictive gain. In cases where some predictive gain is found, say reduced prediction error, more granular reserving, or some other form of GM/ML supremacy, what exactly is the gain in quantitative terms, and are there any general indications of the circumstances in which it might occur?

4. Interpretability. Explainable neural nets (NNs) have entered the literature. These structure NN outputs so as to increase their interpretability. Even so, the results are not always entirely transparent.
Can we define alternative constraints on the form of output so as to enhance interpretability further?

5. Interpretability (continued). In any case, to what extent is interpretability paramount? Can we define circumstances in which it is essential, and others in which it does not matter?

The present volume commences with two articles on loss reserving at the individual claim level, in each case using a form of machine learning. De Felice and Moriconi (2019) use CART (Classification and Regression Trees) together with some granular features, whereas Duval and Pigeon (2019) use gradient boosting. These are followed by two articles on neural networks. Kuo (2019) applies deep learning to claim triangles, but with multi-triangle input and other input features. Then, Poon (2019) is concerned with the issue of interpretability, applying an unexplainability penalty to the neural network. Finally, my own contribution (Taylor 2019) discusses the merits and demerits of GM and ML models, and compares the two families.

Funding: This research received funding assistance from the Australian Research Council, grant number LP130100723.

Conflicts of Interest: The author declares no conflict of interest.

References

De Felice, Massimo, and Franco Moriconi. 2019. Claim Watching and Individual Claims Reserving Using Classification and Regression Trees. Risks 7: 102.
Duval, Francis, and Mathieu Pigeon. 2019. Individual Loss Reserving Using a Gradient Boosting-Based Approach. Risks 7: 79.
Kuo, Kevin. 2019. DeepTriangle: A Deep Learning Approach to Loss Reserving. Risks 7: 97.
Poon, Jacky H. L. 2019. Penalising Unexplainability in Neural Networks for Predicting Payments per Claim Incurred. Risks 7: 95.
Taylor, Greg. 2019. Loss Reserving Models: Granular and Machine Learning Forms. Risks 7: 82.

© 2019 by the author. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
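Question 2 above floats the idea of a data-complexity metric, perhaps based on relative entropy, for triaging datasets between traditional and GM/ML methods. Purely as one illustration of what such a metric might look like, the following sketch scores a claim-size dataset by the Kullback-Leibler divergence of its empirical distribution from a simple fitted benchmark. The function names, the exponential benchmark, and the binning are my own assumptions, not anything proposed in the editorial.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions; eps guards against empty histogram bins."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def complexity_score(claim_sizes, bins=20):
    """Score a claim-size dataset by the divergence of its empirical
    distribution from a simple benchmark: an exponential distribution
    with the same mean, evaluated on the same histogram bins. A larger
    score suggests a more complex dataset, which might then be triaged
    towards more flexible (GM/ML) methods."""
    counts, edges = np.histogram(claim_sizes, bins=bins)
    empirical = counts / counts.sum()
    # Benchmark bin probabilities from the fitted exponential CDF
    benchmark = np.diff(1.0 - np.exp(-edges / np.mean(claim_sizes)))
    benchmark = benchmark / benchmark.sum()
    return kl_divergence(empirical, benchmark)
```

On this sketch, a dataset that really is exponential scores near zero, while, say, a bimodal claim-size distribution scores much higher; whether such a score predicts where GM/ML brings benefit is exactly the open question posed above.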
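The "unexplainability penalty" mentioned in connection with Poon (2019) can be conveyed schematically: augment the usual data-fit loss with a term penalising deviation of the network's predictions from those of an interpretable baseline model. The sketch below is my own illustration of that general idea, not Poon's actual formulation; the function name, the squared-error terms, and the baseline are all assumptions.

```python
import numpy as np

def penalised_loss(y_true, y_nn, y_baseline, lam=0.1):
    """Schematic composite loss: squared-error fit to the observations,
    plus a penalty for deviating from an interpretable baseline model's
    predictions. The weight lam trades predictive flexibility against
    explainability."""
    fit = np.mean((np.asarray(y_true) - np.asarray(y_nn)) ** 2)
    unexplainability = np.mean((np.asarray(y_nn) - np.asarray(y_baseline)) ** 2)
    return float(fit + lam * unexplainability)
```

With lam = 0 the network is free to fit the data however it likes; as lam grows, its predictions are pulled towards the baseline and so become easier to explain, at some cost in fit. This is one concrete form that the interpretability trade-off of questions 4 and 5 can take.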


Risks, Volume 8 (1) – Dec 30, 2019



Publisher
Multidisciplinary Digital Publishing Institute
ISSN
2227-9091
DOI
10.3390/risks8010001

