Bühlmann model

In credibility theory, a branch of study in actuarial science, the Bühlmann model is a random effects model (or "variance components model" or hierarchical linear model) used to determine the appropriate premium for a group of insurance contracts. The model is named after Hans Bühlmann, who first published a description of it in 1967.


Model description

Consider ''i'' risks which generate random losses for which historical data of ''m'' recent claims are available (indexed by ''j''). A premium for the ''i''-th risk is to be determined based on the expected value of claims. A linear estimator which minimizes the mean square error is sought. Write

* X_{ij} for the ''j''-th claim on the ''i''-th risk (we assume that all claims for the ''i''-th risk are independent and identically distributed),
* \bar{X}_i = \frac{1}{m}\sum_{j=1}^{m} X_{ij} for the average value,
* \Theta_i, the parameter for the distribution of the ''i''-th risk,
* m(\vartheta) = \operatorname{E}\left[X_{ij} \mid \Theta_i = \vartheta\right],
* \Pi = \operatorname{E}\left(m(\vartheta) \mid X_{i1}, X_{i2}, \ldots, X_{im}\right), the premium for the ''i''-th risk,
* \mu = \operatorname{E}(m(\vartheta)),
* s^2(\vartheta) = \operatorname{Var}\left[X_{ij} \mid \Theta_i = \vartheta\right],
* \sigma^2 = \operatorname{E}\left[s^2(\vartheta)\right],
* v^2 = \operatorname{Var}\left[m(\vartheta)\right].

Note: m(\vartheta) and s^2(\vartheta) are functions of the random parameter \vartheta.

The Bühlmann model is the solution of the problem:

:\underset{a_{i0}, a_{i1}, \ldots, a_{im}}{\operatorname{arg\,min}} \; \operatorname{E}\left[\left(a_{i0} + \sum_{j=1}^{m} a_{ij} X_{ij} - \Pi\right)^2\right]

where a_{i0} + \sum_{j=1}^{m} a_{ij} X_{ij} is the estimator of the premium \Pi and arg min represents the parameter values which minimize the expression.
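To make the setup concrete, the following simulation sketch (not from the article; it assumes a normal-normal hierarchy, so that m(\vartheta) = \vartheta and s^2(\vartheta) = \sigma^2 is constant) draws risk parameters and claims, then recovers the structural quantities \mu and \sigma^2 empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) structural parameters: collective mean mu,
# variance of the hypothetical means v^2, and within-risk variance sigma^2.
mu, v2, sigma2 = 100.0, 25.0, 400.0
n_risks, m = 5000, 10                 # i risks, m claims each

# Two-stage draw: Theta_i ~ N(mu, v^2), then X_ij | Theta_i ~ N(Theta_i, sigma^2).
theta = rng.normal(mu, np.sqrt(v2), size=n_risks)
X = rng.normal(theta[:, None], np.sqrt(sigma2), size=(n_risks, m))

xbar = X.mean(axis=1)                    # per-risk averages \bar X_i
print(xbar.mean())                       # ~ mu = E[m(theta)]
print(X.var(axis=1, ddof=1).mean())      # ~ sigma^2 = E[s^2(theta)]
print(xbar.var(ddof=1))                  # ~ v^2 + sigma^2 / m
```

Note that the variance of the per-risk averages mixes both variance components, v^2 + \sigma^2/m, which is what the credibility factor below trades off.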


Model solution

The solution of the problem is:

:Z\bar{X}_i + (1-Z)\mu

where:

:Z = \frac{1}{1 + \frac{\sigma^2}{v^2 m}} = \frac{m v^2}{m v^2 + \sigma^2}

We can give this result the interpretation that the Z part of the premium is based on the information that we have about the specific risk, and the (1-Z) part is based on the information that we have about the whole population.
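The solution can be applied directly once the structural parameters are known (in practice they are estimated from portfolio data). A minimal sketch, with hypothetical numbers and a hypothetical helper name `buhlmann_premium`:

```python
import numpy as np

def buhlmann_premium(claims, mu, sigma2, v2):
    """Credibility premium Z * xbar + (1 - Z) * mu with Z = m / (m + sigma2/v2).

    mu, sigma2, v2 are the structural parameters, assumed known here.
    """
    m = len(claims)
    k = sigma2 / v2                 # credibility coefficient sigma^2 / v^2
    Z = m / (m + k)                 # credibility factor
    return Z * np.mean(claims) + (1.0 - Z) * mu

# Hypothetical data: four observed claims on one risk, collective mean 95.
claims = [120.0, 80.0, 110.0, 90.0]
premium = buhlmann_premium(claims, mu=95.0, sigma2=400.0, v2=100.0)
print(round(premium, 2))            # prints 97.5
```

Here Z = 4 / (4 + 400/100) = 0.5, so the premium is halfway between the risk's own average claim (100) and the collective mean (95). As m grows, Z tends to 1 and the premium relies increasingly on the risk's own experience.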


Proof

The following proof is slightly different from the one in the original paper. It is also more general, because it considers all linear estimators, while the original proof considers only estimators based on the average claim.

:Lemma. The problem can be stated alternatively as:

::f = \operatorname{E}\left[\left(a_{i0} + \sum_{j=1}^{m} a_{ij} X_{ij} - m(\vartheta)\right)^2\right] \to \min

Proof:

:\begin{align}
\operatorname{E}\left[\left(a_{i0} + \sum_{j=1}^{m} a_{ij} X_{ij} - m(\vartheta)\right)^2\right]
&= \operatorname{E}\left[\left(a_{i0} + \sum_{j=1}^{m} a_{ij} X_{ij} - \Pi\right)^2\right] + \operatorname{E}\left[\left(m(\vartheta) - \Pi\right)^2\right] + 2\operatorname{E}\left[\left(a_{i0} + \sum_{j=1}^{m} a_{ij} X_{ij} - \Pi\right)\left(m(\vartheta) - \Pi\right)\right] \\
&= \operatorname{E}\left[\left(a_{i0} + \sum_{j=1}^{m} a_{ij} X_{ij} - \Pi\right)^2\right] + \operatorname{E}\left[\left(m(\vartheta) - \Pi\right)^2\right]
\end{align}

The last equation follows from the fact that

:\begin{align}
\operatorname{E}\left[\left(a_{i0} + \sum_{j=1}^{m} a_{ij} X_{ij} - \Pi\right)\left(m(\vartheta) - \Pi\right)\right]
&= \operatorname{E}\left[\operatorname{E}\left[\left(a_{i0} + \sum_{j=1}^{m} a_{ij} X_{ij} - \Pi\right)\left(m(\vartheta) - \Pi\right) \,\middle|\, X_{i1}, \ldots, X_{im}\right]\right] \\
&= \operatorname{E}\left[\left(a_{i0} + \sum_{j=1}^{m} a_{ij} X_{ij} - \Pi\right)\operatorname{E}\left[m(\vartheta) - \Pi \,\middle|\, X_{i1}, \ldots, X_{im}\right]\right] \\
&= 0
\end{align}

We are using here the law of total expectation and the fact that \Pi = \operatorname{E}\left[m(\vartheta) \mid X_{i1}, \ldots, X_{im}\right].

In the previous equation we decomposed the minimized function into the sum of two expressions. The second expression does not depend on the parameters used in minimization. Therefore, minimizing the function is the same as minimizing the first part of the sum.

Let us find the critical points of the function:

:\frac{1}{2}\frac{\partial f}{\partial a_{i0}} = \operatorname{E}\left[a_{i0} + \sum_{j=1}^{m} a_{ij} X_{ij} - m(\vartheta)\right] = a_{i0} + \sum_{j=1}^{m} a_{ij}\operatorname{E}(X_{ij}) - \operatorname{E}(m(\vartheta)) = a_{i0} + \left(\sum_{j=1}^{m} a_{ij} - 1\right)\mu

:a_{i0} = \left(1 - \sum_{j=1}^{m} a_{ij}\right)\mu

For k \neq 0 we have:

:\frac{1}{2}\frac{\partial f}{\partial a_{ik}} = \operatorname{E}\left[X_{ik}\left(a_{i0} + \sum_{j=1}^{m} a_{ij} X_{ij} - m(\vartheta)\right)\right] = \operatorname{E}\left[X_{ik}\right]a_{i0} + \sum_{j=1,\, j\neq k}^{m} a_{ij}\operatorname{E}\left[X_{ij}X_{ik}\right] + a_{ik}\operatorname{E}\left[X_{ik}^2\right] - \operatorname{E}\left[X_{ik}\, m(\vartheta)\right] = 0

We can simplify the derivative, noting that for j \neq k:

:\begin{align}
\operatorname{E}\left[X_{ij}X_{ik}\right] &= \operatorname{E}\left[\operatorname{E}\left[X_{ij}X_{ik} \mid \vartheta\right]\right] = \operatorname{E}\left[\operatorname{Cov}(X_{ij}, X_{ik} \mid \vartheta) + \operatorname{E}(X_{ij} \mid \vartheta)\operatorname{E}(X_{ik} \mid \vartheta)\right] = \operatorname{E}\left[(m(\vartheta))^2\right] = v^2 + \mu^2 \\
\operatorname{E}\left[X_{ik}^2\right] &= \operatorname{E}\left[\operatorname{E}\left[X_{ik}^2 \mid \vartheta\right]\right] = \operatorname{E}\left[s^2(\vartheta) + (m(\vartheta))^2\right] = \sigma^2 + v^2 + \mu^2 \\
\operatorname{E}\left[X_{ik}\, m(\vartheta)\right] &= \operatorname{E}\left[\operatorname{E}\left[X_{ik}\, m(\vartheta) \mid \Theta_i\right]\right] = \operatorname{E}\left[(m(\vartheta))^2\right] = v^2 + \mu^2
\end{align}

(the conditional covariance vanishes because the claims are independent given \vartheta).

Taking the above equations and inserting them into the derivative, we have:

:\frac{1}{2}\frac{\partial f}{\partial a_{ik}} = \left(1 - \sum_{j=1}^{m} a_{ij}\right)\mu^2 + \sum_{j=1,\, j\neq k}^{m} a_{ij}(v^2 + \mu^2) + a_{ik}(\sigma^2 + v^2 + \mu^2) - (v^2 + \mu^2) = a_{ik}\sigma^2 - \left(1 - \sum_{j=1}^{m} a_{ij}\right)v^2 = 0

:\sigma^2 a_{ik} = v^2\left(1 - \sum_{j=1}^{m} a_{ij}\right)

The right side does not depend on ''k''. Therefore, all the a_{ik} are constant:

:a_{i1} = \cdots = a_{im} = \frac{v^2}{\sigma^2 + m v^2}

From the solution for a_{i0} we have:

:a_{i0} = (1 - m a_{i1})\mu = \left(1 - \frac{m v^2}{\sigma^2 + m v^2}\right)\mu

Finally, the best estimator is:

:a_{i0} + \sum_{j=1}^{m} a_{ij} X_{ij} = \frac{m v^2}{\sigma^2 + m v^2}\bar{X}_i + \left(1 - \frac{m v^2}{\sigma^2 + m v^2}\right)\mu = Z\bar{X}_i + (1 - Z)\mu
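The closed form can be checked numerically: solving the normal equations of the minimization directly, using only the second moments derived above, reproduces the same coefficients. This is a verification sketch with illustrative parameter values, not part of the original proof:

```python
import numpy as np

# Illustrative (hypothetical) structural parameters and number of claims.
mu, sigma2, v2, m = 95.0, 400.0, 100.0, 4

# Gram matrix of (1, X_1, ..., X_m) and cross moments with m(theta),
# built from the moment identities derived in the proof.
A = np.empty((m + 1, m + 1))
A[0, 0] = 1.0
A[0, 1:] = A[1:, 0] = mu                          # E[X_j] = mu
A[1:, 1:] = v2 + mu**2                            # E[X_j X_k], j != k
np.fill_diagonal(A[1:, 1:], sigma2 + v2 + mu**2)  # E[X_k^2]
b = np.empty(m + 1)
b[0] = mu                                         # E[m(theta)]
b[1:] = v2 + mu**2                                # E[X_k m(theta)]

a = np.linalg.solve(A, b)                         # (a_0, a_1, ..., a_m)
a_k = v2 / (sigma2 + m * v2)                      # closed-form coefficient
print(np.allclose(a[1:], a_k))                    # prints True: all a_k equal
print(np.allclose(a[0], (1 - m * a_k) * mu))      # prints True: intercept matches
```

The Gram matrix is positive definite whenever \sigma^2 > 0, so the critical point found in the proof is the unique minimizer.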

