Affiliation: University of Connecticut, US
Title: Predictive Actuarial Analytics Using Tree-Based Models
Abstract: Because of their many advantages, tree-based models have become an increasingly popular alternative for building classification and regression models. Innovations on the original methods, such as random forests and gradient boosting, have further improved their predictive capabilities. Quan et al. (2018) examined the performance of tree-based models for the valuation of the guarantees embedded in variable annuities. We found that tree-based models are generally very efficient in producing accurate predictions, with the gradient boosting ensemble method performing best. Quan and Valdez (2018) applied multivariate tree-based models to multi-line insurance claims data with correlated responses drawn from the Wisconsin Local Government Property Insurance Fund (LGPIF). We were able to capture the inherent relationship among the response variables and improve marginal predictive accuracy. Quan et al. (2019) proposed tree-based models with a hybrid structure as an alternative to the Tweedie Generalized Linear Model (GLM). This hybrid structure captures the benefits of tuning hyperparameters at each step of the algorithm, thereby allowing for improved prediction accuracy. We examined the performance of this model vis-à-vis the Tweedie GLM using the LGPIF and simulated datasets. Our empirical results indicate that this hybrid tree-based model produces more accurate predictions without loss of intuitive interpretation.
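As a flavor of the kind of comparison described above, the following minimal sketch (not the authors' code; entirely synthetic data with hypothetical rating factors) fits a gradient boosting model and a linear benchmark to simulated claim costs that contain a nonlinear interaction:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 3))               # hypothetical rating factors
# Claim cost with a nonlinear interaction plus skewed (gamma) noise.
y = np.exp(1.0 + 2.0 * X[:, 0] * X[:, 1]) + rng.gamma(2.0, 1.0, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
ols = LinearRegression().fit(X_tr, y_tr)

mse_gbm = mean_squared_error(y_te, gbm.predict(X_te))
mse_ols = mean_squared_error(y_te, ols.predict(X_te))
print(mse_gbm < mse_ols)                      # boosting captures the interaction
```

The trees partition the covariate space and so pick up the interaction between the first two factors automatically, which the linear model cannot represent without hand-crafted features.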
Affiliation: University of Connecticut, US
Title: Application of Random Effects in Dependent Compound Risk Model
Abstract: In ratemaking for general insurance, the calculation of a pure premium has traditionally been based on modeling both frequency and severity in an aggregate claims model. For simplicity, it has also been standard practice to assume the independence of loss frequency and loss severity. In recent years, however, there has been sporadic interest in the actuarial literature in models that depart from this independence. Moreover, typical property and casualty insurance data allow us to explore the benefits of using random effects for predicting insurance claims observed longitudinally, that is, over a period of time. Thus, this article introduces a dependent two-part model with random effects for insurance ratemaking, tests the presence of random effects via a Bayesian sensitivity analysis with its own theoretical development, and presents empirical results and performance measures based on out-of-sample validation procedures.
Affiliation: Georgia State University, US
Title: Statistical Inference for Mortality Models
Abstract: Underwriters of annuity products and administrators of pension funds are under financial obligation to their policyholders until the death of the counterparty. Hence, the underwriters are subject to longevity risk when the average lifespan of the entire population increases; such risk can, however, be managed through hedging practices based on parametric mortality models. As the benchmark mortality model in the insurance industry is the Lee-Carter model, we first summarize some flaws of the model and of the inference methods derived from it. Based on these findings, we propose a modified Lee-Carter model, accompanied by rigorous statistical inference with asymptotic results and satisfactory numerical and simulation results derived from a small sample. We then propose a bias-corrected estimator that is consistent and asymptotically normally distributed regardless of whether the mortality index is a unit root or a stationary AR(1) time series. We further extend the model to accommodate an AR(2) process for the mortality index and a bivariate dataset of U.S. mortality rates. Finally, we conclude with a detailed model validation and some discussion of potential hedging practices based on our parametric model.
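For readers unfamiliar with the benchmark, the classical Lee-Carter fit (log m_{x,t} = a_x + b_x k_t, estimated via singular value decomposition) can be sketched on simulated data as follows; this shows the baseline model only, not the modified model or the bias-corrected estimator from the talk, and all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ages, n_years = 10, 40
a = np.linspace(-6.0, -2.0, n_ages)            # hypothetical baseline log-mortality by age
b = np.full(n_ages, 1.0 / n_ages)              # age sensitivities, normalized to sum to 1
k = np.cumsum(rng.normal(-0.3, 0.2, n_years))  # mortality index: random walk with drift
k -= k.mean()                                  # identification constraint: sum of k_t is 0

log_m = a[:, None] + b[:, None] * k[None, :] + rng.normal(0.0, 0.01, (n_ages, n_years))

a_hat = log_m.mean(axis=1)                     # a_x estimated by row means
U, s, Vt = np.linalg.svd(log_m - a_hat[:, None], full_matrices=False)
b_hat = U[:, 0] / U[:, 0].sum()                # identification: b_x sums to 1
k_hat = s[0] * Vt[0] * U[:, 0].sum()           # rescale k_t accordingly
```

The rank-one SVD approximation recovers the age pattern b_x and the mortality index k_t up to the usual identification constraints; forecasting then reduces to modeling k_t as a time series, which is where the unit-root versus stationary AR(1) distinction in the talk matters.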
Affiliation: University of Amsterdam, The Netherlands
Title: Two-Part D-Vine Copula Models for Longitudinal Insurance Claim Data
Abstract: Insurance companies keep track of each policyholder's claims over time, resulting in longitudinal data. Efficient modeling of time dependence in longitudinal claim data can improve the prediction of future claims needed, for example, for ratemaking. Insurance claim data have their own special complexity. They usually follow a two-part mixed distribution: a probability mass at zero corresponding to no claim, and an otherwise positive claim from a skewed and long-tailed distribution. We propose a two-part D-vine copula model to study longitudinal mixed claim data. We build two stationary D-vine copulas. One models the time dependence in the binary outcomes indicating whether or not a claim has occurred, and the other models the dependence over time in the claim size given occurrence. The proposed model can straightforwardly predict the probability of making claims and the quantiles of severity given occurrence. We use our approach to investigate a dataset from the Local Government Property Insurance Fund in the state of Wisconsin.
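The two-part mixed distribution described above can be illustrated with a short simulation (hypothetical parameter values; the serial D-vine dependence, which is the paper's contribution, is omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
p_claim = 0.2                                   # hypothetical claim probability
occurred = rng.uniform(size=n) < p_claim        # binary part: claim or no claim
severity = rng.lognormal(mean=8.0, sigma=1.5, size=n)  # skewed, long-tailed part
claims = np.where(occurred, severity, 0.0)      # mixed outcome with a mass at zero

# The pure premium decomposes as P(claim) * E[severity | claim]:
print(claims.mean(), p_claim * np.exp(8.0 + 1.5**2 / 2))
```

The two D-vine copulas in the paper then link, respectively, the binary indicators and the positive severities across years for the same policyholder, rather than treating each year independently as this sketch does.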
Affiliation: University of New South Wales, Australia
Title: Insuring Longevity Risk and Long-Term Care: Bequest, Housing and Liquidity
Abstract: The demand for life annuities and long-term care insurance (LTCI) varies among retirees with different preferences and financial profiles. This paper shows that bequest motives can enhance the demand for LTCI unless the opportunity cost of self-insurance through precautionary savings is low, which typically occurs when retirees have sufficient liquid wealth. If retirees tend to liquidate housing assets in the event of a disability that entails sizeable costs, housing wealth is likely to enhance the demand for annuities and to crowd out the demand for LTCI. Cash-poor, asset-rich retirees show little interest in annuities, but they may purchase LTCI to preserve their bequests.
Zhiyi (Joey) Shen
Affiliation: University of Waterloo, Canada
Title: From Polaris Variable Annuities to Regression-based Monte Carlo
Abstract: In this talk, I will first discuss the no-arbitrage pricing of Polaris variable annuities (VAs), which were issued by the American International Group in recent years. Variable annuities are prevailing equity-linked insurance products that provide the policyholder with the flexibility of dynamic withdrawals, mortality protection, and guaranteed income payments against a market decline. The Polaris allows a shadow account to lock in the high watermark of the investment account over a monitoring period that depends on the policyholder's choice of his/her first withdrawal time. This feature makes the insurer's payouts depend on the policyholder's withdrawal behaviours and significantly complicates the pricing problem. By carefully introducing certain auxiliary state variables, we manage to formulate the pricing problem within a stochastic optimal control framework and develop a computationally efficient algorithm to approximate the solution.
Driven by the challenges of pricing Polaris VAs, in the second part of the talk I will introduce a regression-based Monte Carlo algorithm that we propose for numerically solving a class of general stochastic optimal control problems. The algorithm rests on three pillars: the construction of an auxiliary stochastic control model, an artificial simulation of the post-action value of the state process, and a shape-preserving sieve estimation method. The algorithm enjoys many merits, including obviating forward simulation and control randomization, eliminating in-sample bias, avoiding extrapolation of the value function, and alleviating the computational burden of tuning-parameter selection.
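As background, the classical regression-based Monte Carlo idea that this line of work builds on can be sketched with a Longstaff-Schwartz valuation of a Bermudan put: regress the continuation value on the current state and exercise when the immediate payoff exceeds it. This is a generic textbook illustration with hypothetical parameters, not the authors' algorithm, which replaces this scheme's forward simulation and plain regression with the three pillars listed above:

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps = 20_000, 50
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0  # hypothetical market parameters
dt = T / n_steps

# Simulate geometric Brownian motion paths of the underlying.
z = rng.normal(size=(n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

payoff = lambda s: np.maximum(K - s, 0.0)
cash = payoff(S[:, -1])                        # value if held to maturity
for t in range(n_steps - 1, 0, -1):
    itm = payoff(S[:, t]) > 0                  # regress only on in-the-money paths
    X, Y = S[itm, t], cash[itm] * np.exp(-r * dt)
    cont = np.polyval(np.polyfit(X, Y, 2), X)  # estimated continuation value
    cash *= np.exp(-r * dt)
    cash[itm] = np.where(payoff(X) > cont, payoff(X), cash[itm])
price = cash.mean() * np.exp(-r * dt)
print(round(price, 2))                         # above the European value of about 5.57
```

The shadow-account and withdrawal-timing features of Polaris make the state space and the admissible controls far richer than in this single-asset stopping problem, which is what motivates the auxiliary control model and the shape-preserving sieve estimator.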
This talk is based on two joint works with Chengguo Weng from the University of Waterloo.
Tsz Chai “Samson” Fung
Affiliation: University of Toronto, Canada
Title: Mixture of Experts Regression Models for Insurance Ratemaking and Reserving
Abstract: Understanding the effect of policyholders' risk profiles on the number and amount of claims, as well as the dependence among different types of claims, is critical to insurance ratemaking and IBNR-type reserving. To accurately quantify such features, it is essential to develop a regression model which is flexible, interpretable and statistically tractable.
In this presentation, I will discuss a highly flexible nonlinear regression model we have recently developed, namely the logit-weighted reduced mixture of experts (LRMoE) model, for multivariate claim frequency or severity distributions. The LRMoE model is interpretable, as it has two components: gating functions, which classify policyholders into various latent sub-classes, and expert functions, which govern the distributional properties of the claims. The model is also flexible enough to fit many types of claim data accurately and hence minimizes the issue of model selection.
Model implementation is illustrated in two ways using a real automobile insurance dataset from a major European insurance company. We first fit the multivariate claim frequencies using an Erlang count expert function. Apart from obtaining excellent fitting results, we can interpret the fitted model from an insurance perspective and visualize the relationship between policyholders' information and their risk levels. We further demonstrate how the fitted model may be useful for insurance ratemaking. The second illustration deals with insurance loss severity data, which often exhibit heavy-tailed behavior. Using a Transformed Gamma expert function, our model can fit the severity and reporting-delay components of the dataset, which is ultimately shown to be useful and crucial for an adequate prediction of the IBNR reserve.
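The gating/expert decomposition can be sketched as follows (an illustrative softmax gating with two latent classes and class-specific expected claim counts standing in for full expert distributions; the LRMoE itself uses Erlang count and Transformed Gamma expert functions, and hypothetical coefficient values are used throughout):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)      # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(3)
x = rng.normal(size=(5, 2))                    # hypothetical policyholder covariates
alpha = np.array([[0.5, -1.0],                 # gating coefficients,
                  [-0.5, 1.0]])                # one row per latent class
lam = np.array([0.3, 2.0])                     # class-specific expected claim counts

gate = softmax(x @ alpha.T)                    # probabilistic class membership
mean_claims = gate @ lam                       # mixture-implied expected counts
print(mean_claims)
```

The gating probabilities depend on the covariates, so each policyholder's predicted risk is a covariate-driven blend of the low-risk and high-risk classes, which is what makes the latent classes interpretable.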
This project is joint work with Andrei Badescu and Sheldon Lin.