
Thursday, May 24, 2012

Modelling Loss Data

In our previous post on Loss Event Data, we discussed the types of fields and specific risk framework elements that need to be in place for a best practice Loss Data Repository; you can follow this [Link] for a recap. In this third post of the series on the Loss Data Monte Carlo debate (this is turning into a bit of a tome on data modelling), we take a look at the types of techniques that can be used to understand Operational Risk Loss Data better.
  
Fourteen key models are listed in this post, and a brief summary, as well as the purpose of each model, is supplied within.

14 Loss Models
In general, only a handful of businesses correctly capture Operational Risk Loss Data, and of those that do, only a small number of risk units are modelling their risk data in a coherent manner.

After a bit of research on the internet and in various other channels, it has become relatively apparent to me that there isn't a comprehensive list of potential models that can be used for quantifying operational risk properly. I would have expected some analyst, somewhere, at some point in time to have done this work and published it. Nonetheless, this post attempts to kick off this initiative.


At the end of the day, there is no point capturing Loss Event Information if you aren't going to do anything with it. So with all of this in mind, let us create a single list of potential statistical techniques for Operational Risk Loss Event Modelling.

Please follow this [Link] for the relevant presentation.



Causal Capital Loss Data Model Library V1.0 | 21st May 2012 [Click here for full list]
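Before getting into the finer points, it may help to see what the simplest of these techniques looks like in practice. The snippet below is a minimal frequency-severity Monte Carlo sketch in Python; the Poisson frequency, lognormal severity and the parameter values are my own illustrative assumptions and are not taken from the model library itself.

```python
import numpy as np

# A minimal frequency-severity (compound) Monte Carlo sketch.
# Assumptions (illustrative only): Poisson annual event frequency,
# lognormal single-loss severity, arbitrarily chosen parameters.
rng = np.random.default_rng(42)

lam = 12.0               # assumed mean number of loss events per year
mu, sigma = 10.0, 1.4    # assumed lognormal severity parameters (log scale)
n_sims = 100_000         # number of simulated years

annual_losses = np.empty(n_sims)
for i in range(n_sims):
    n_events = rng.poisson(lam)                      # how many losses this year
    severities = rng.lognormal(mu, sigma, n_events)  # size of each loss
    annual_losses[i] = severities.sum()              # aggregate annual loss

# Summary statistics typically reported from such a simulation
print("Expected annual loss  :", annual_losses.mean())
print("99.9% quantile (OpVaR):", np.quantile(annual_losses, 0.999))
```

The loop keeps the frequency-severity logic explicit; in production the simulation would normally be vectorised and the distributions fitted to the firm's stratified loss data rather than assumed.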

There are a few points that should be kept in mind when modelling Loss Event Data:

[1] Operational Risk analysts who do not classify or stratify their risk event data are going to find it very difficult to carry out many of the tests that have been described in the model library.

[2] If the Loss Database being used to capture event data lacks some of the fields I have itemized in the previous post [Link], then some of the tests described in this journal will be unavailable to the analyst.

[3] After the risk data has been categorized and stratified, if only a few data points remain in a given loss category, the analyst may consider rolling up a level in the event classification tree to capture more data points. Alternatively, merging data points from two or more categories into one classification can be an effective way to lift the number of data points available for modelling. One must remember that this roll up exercise will also increase the generalization of the data being modelled; a sketch of the roll up is given after point [5] below.

[4] If you are a bank attempting to model loss data, and perhaps even if you are not, you will find an outstanding and recommended Operational Risk Category list supplied in Annex VII of the Basel II accord. In my opinion, Annex VII is a great starting point for any risk department wishing to classify its loss events cleanly, and so many banks use Annex VII that it has become a bit of a standard.


Annex VII Basel II Accord | June 2004 
  
[5] Finally, the unique models described in our attached presentation tell different stories about the nature of the potential operational risks that a business unit faces. One model doesn't really replace another, although some model outputs are being combined to create different perspectives on the estimation of potential loss.
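
As referenced in point [3], the following sketch shows what rolling up a category level can look like in practice. The miniature taxonomy uses Basel II Annex VII style Level 1 and Level 2 labels as mentioned in point [4], but the loss rows themselves are toy values invented purely for demonstration.

```python
import pandas as pd

# Toy loss events, classified at two levels of an event taxonomy.
# The labels follow the Basel II Annex VII style; the rows are
# purely illustrative and not real loss data.
events = pd.DataFrame({
    "level1": ["External Fraud", "External Fraud",
               "Execution, Delivery & Process Management",
               "Execution, Delivery & Process Management",
               "Execution, Delivery & Process Management"],
    "level2": ["Theft and Fraud", "Systems Security",
               "Transaction Capture, Execution & Maintenance",
               "Monitoring and Reporting",
               "Transaction Capture, Execution & Maintenance"],
    "gross_loss": [120_000, 45_000, 8_000, 15_500, 9_200],
})

# Counts at Level 2 may be too thin to model on their own ...
print(events.groupby("level2")["gross_loss"].agg(["count", "sum"]))

# ... so roll up one level of the classification tree. This lifts the
# number of data points per bucket at the cost of more generalization.
rolled_up = events.groupby("level1")["gross_loss"].agg(["count", "sum", "mean"])
print(rolled_up)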

1 comment:

  1. The composite model assumes different weighted distributions for the head and tail of the severity distribution, and several such models have been introduced in the literature for modelling insurance loss data.

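
To illustrate the composite idea raised in the comment above, here is a hedged sketch of a spliced severity density: a lognormal body below a threshold with a generalized Pareto tail above it. The threshold, the parameters and the weighting choice are illustrative assumptions only and do not reference any particular model from the library or the literature.

```python
import numpy as np
from scipy import stats

# A simple spliced (composite) severity density: lognormal body below a
# threshold u, generalized Pareto tail above it. The weight w given to the
# body is chosen here as the lognormal mass below u; published composite
# models usually impose additional smoothness conditions at the splice.
u = 250_000.0                                         # assumed splicing threshold
body = stats.lognorm(s=1.2, scale=np.exp(10.5))       # assumed body parameters
tail = stats.genpareto(c=0.3, loc=u, scale=80_000.0)  # assumed tail parameters
w = body.cdf(u)                                       # mass assigned to the body

def composite_pdf(x):
    """Density of the spliced severity model at x (scalar or array)."""
    x = np.asarray(x, dtype=float)
    below = (x <= u)
    # Body: lognormal density renormalised to the region below u, scaled by w.
    body_part = w * body.pdf(x) / body.cdf(u)
    # Tail: generalized Pareto density above u, scaled by the remaining mass.
    tail_part = (1.0 - w) * tail.pdf(x)
    return np.where(below, body_part, tail_part)

print(composite_pdf([50_000.0, 500_000.0]))
```

In practice the threshold and the tail parameters would be estimated from the loss data (for example via mean excess plots), and continuity conditions are usually imposed at the splice point.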