In the mathematical theory of decisions, decision-theoretic rough sets (DTRS) is a probabilistic extension of rough set classification. A rough set, first described by Zdzisław I. Pawlak, is a formal approximation of a crisp (conventional) set in terms of a pair of sets giving its lower and upper approximations. First proposed in 1990 by Yiyu Yao, the DTRS extension uses loss functions to derive the \textstyle \alpha and \textstyle \beta region parameters. As with rough sets, the lower and upper approximations of a set are used.


Definitions

The following contains the basic principles of decision-theoretic rough sets.


Conditional risk

Using the Bayesian decision procedure, the decision-theoretic rough set (DTRS) approach allows for minimum-risk decision making based on observed evidence. Let \textstyle A=\{a_1,\ldots,a_m\} be a finite set of \textstyle m possible actions and let \textstyle \Omega=\{w_1,\ldots,w_s\} be a finite set of \textstyle s states. Let \textstyle P(w_j\mid [x]) be the conditional probability of an object \textstyle x being in state \textstyle w_j given the object description \textstyle [x], and let \textstyle \lambda(a_i\mid w_j) denote the loss, or cost, for performing action \textstyle a_i when the state is \textstyle w_j. The expected loss (conditional risk) associated with taking action \textstyle a_i is given by:

: R(a_i\mid [x]) = \sum_{j=1}^s \lambda(a_i\mid w_j)P(w_j\mid [x]).

Object classification with the approximation operators can be fitted into the Bayesian decision framework. The set of actions is given by \textstyle A=\{a_P, a_N, a_B\}, where \textstyle a_P, \textstyle a_N, and \textstyle a_B represent the three actions of classifying an object into POS(\textstyle A), NEG(\textstyle A), and BND(\textstyle A), respectively. To indicate whether an element is in \textstyle A or not in \textstyle A, the set of states is given by \textstyle \Omega=\{A, A^c\}. Let \textstyle \lambda(a_\diamond\mid A) denote the loss incurred by taking action \textstyle a_\diamond when an object belongs to \textstyle A, and let \textstyle \lambda(a_\diamond\mid A^c) denote the loss incurred by taking the same action when the object belongs to \textstyle A^c.
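The conditional risk formula is a simple weighted sum, which can be sketched as follows. The function name, the two-state setup, and all numeric loss and probability values below are illustrative choices, not taken from the article:

```python
# Conditional risk of an action a_i given the description [x]:
#   R(a_i | [x]) = sum_j lambda(a_i | w_j) * P(w_j | [x])

def conditional_risk(losses, posterior):
    """losses[j] = lambda(a_i | w_j); posterior[j] = P(w_j | [x])."""
    return sum(l * p for l, p in zip(losses, posterior))

# Two states (w_1 = "object in A", w_2 = "object not in A") and the
# losses of one hypothetical action under each state:
risk = conditional_risk(losses=[0.0, 4.0], posterior=[0.7, 0.3])
print(risk)  # 0.0*0.7 + 4.0*0.3 = 1.2
```

Minimum-risk decision making then amounts to evaluating this sum for every action and choosing the action with the smallest value.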


Loss functions

Let \textstyle \lambda_{PP}, \textstyle \lambda_{BP}, and \textstyle \lambda_{NP} denote the losses for classifying an object in \textstyle A into the POS, BND, and NEG regions, respectively. Likewise, a loss function \textstyle \lambda_{\diamond N} denotes the loss of classifying an object that does not belong to \textstyle A into the region specified by \textstyle \diamond. The expected loss \textstyle R(a_\diamond\mid [x]) associated with taking each individual action can be expressed as:

: \textstyle R(a_P\mid [x]) = \lambda_{PP}P(A\mid [x]) + \lambda_{PN}P(A^c\mid [x]),
: \textstyle R(a_N\mid [x]) = \lambda_{NP}P(A\mid [x]) + \lambda_{NN}P(A^c\mid [x]),
: \textstyle R(a_B\mid [x]) = \lambda_{BP}P(A\mid [x]) + \lambda_{BN}P(A^c\mid [x]),

where \textstyle \lambda_{\diamond P}=\lambda(a_\diamond\mid A), \textstyle \lambda_{\diamond N}=\lambda(a_\diamond\mid A^c), and \textstyle \diamond=P, \textstyle N, or \textstyle B.
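The three expected losses can be computed directly from \textstyle P(A\mid [x]) and the six loss values, since \textstyle P(A^c\mid [x]) = 1 - P(A\mid [x]). A minimal sketch, with made-up loss values chosen only for illustration:

```python
# Expected loss of each classification action, following
#   R(a_P | [x]) = lambda_PP * P(A|[x]) + lambda_PN * P(A^c|[x]), etc.

def expected_losses(p_a, lam):
    """p_a = P(A | [x]); lam maps 'PP','PN','NP','NN','BP','BN' to losses."""
    p_ac = 1.0 - p_a  # P(A^c | [x])
    return {
        "a_P": lam["PP"] * p_a + lam["PN"] * p_ac,
        "a_N": lam["NP"] * p_a + lam["NN"] * p_ac,
        "a_B": lam["BP"] * p_a + lam["BN"] * p_ac,
    }

lam = {"PP": 0.0, "BP": 1.0, "NP": 4.0,   # losses when the object is in A
       "NN": 0.0, "BN": 1.0, "PN": 4.0}   # losses when it is in A^c
risks = expected_losses(0.7, lam)
best = min(risks, key=risks.get)  # minimum-risk action
```

With these example values the boundary action \textstyle a_B has the smallest expected loss at \textstyle P(A\mid [x]) = 0.7, i.e. the object would be deferred to the boundary region rather than accepted or rejected.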


Minimum-risk decision rules

If we consider loss functions satisfying \textstyle \lambda_{PP} \leq \lambda_{BP} < \lambda_{NP} and \textstyle \lambda_{NN} \leq \lambda_{BN} < \lambda_{PN}, the following decision rules are formulated (''P'', ''N'', ''B''):

* P: If \textstyle P(A\mid [x]) \geq \gamma and \textstyle P(A\mid [x]) \geq \alpha, decide POS(\textstyle A);
* N: If \textstyle P(A\mid [x]) \leq \beta and \textstyle P(A\mid [x]) \leq \gamma, decide NEG(\textstyle A);
* B: If \textstyle \beta \leq P(A\mid [x]) \leq \alpha, decide BND(\textstyle A);

where

: \alpha = \frac{\lambda_{PN} - \lambda_{BN}}{(\lambda_{PN} - \lambda_{BN}) + (\lambda_{BP} - \lambda_{PP})},
: \gamma = \frac{\lambda_{PN} - \lambda_{NN}}{(\lambda_{PN} - \lambda_{NN}) + (\lambda_{NP} - \lambda_{PP})},
: \beta = \frac{\lambda_{BN} - \lambda_{NN}}{(\lambda_{BN} - \lambda_{NN}) + (\lambda_{NP} - \lambda_{BP})}.

The \textstyle \alpha, \textstyle \beta, and \textstyle \gamma values define the three different regions, giving us an associated risk for classifying an object. When \textstyle \alpha > \beta, we get \textstyle \alpha > \gamma > \beta and can simplify (''P'', ''N'', ''B'') into (''P''1, ''N''1, ''B''1):

* P1: If \textstyle P(A\mid [x]) \geq \alpha, decide POS(\textstyle A);
* N1: If \textstyle P(A\mid [x]) \leq \beta, decide NEG(\textstyle A);
* B1: If \textstyle \beta < P(A\mid [x]) < \alpha, decide BND(\textstyle A).

When \textstyle \alpha = \beta = \gamma, we can simplify the rules (P-B) into (P2-B2), which divide the regions based solely on \textstyle \alpha:

* P2: If \textstyle P(A\mid [x]) > \alpha, decide POS(\textstyle A);
* N2: If \textstyle P(A\mid [x]) < \alpha, decide NEG(\textstyle A);
* B2: If \textstyle P(A\mid [x]) = \alpha, decide BND(\textstyle A).

Data mining, feature selection (the process of selecting a subset of relevant features, or variables, for use in model construction), information retrieval, and classification are some of the applications in which the DTRS approach has been successfully used.
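The threshold formulas and the simplified rules (''P''1, ''N''1, ''B''1) can be sketched together. The loss values are again illustrative; with them the ordering \textstyle \alpha > \gamma > \beta holds, so the simplified rules apply:

```python
# Alpha, beta, gamma thresholds derived from the six loss values, and the
# resulting three-way decision rule (P1, N1, B1), valid when alpha > beta.

def thresholds(lam):
    a = (lam["PN"] - lam["BN"]) / ((lam["PN"] - lam["BN"]) + (lam["BP"] - lam["PP"]))
    g = (lam["PN"] - lam["NN"]) / ((lam["PN"] - lam["NN"]) + (lam["NP"] - lam["PP"]))
    b = (lam["BN"] - lam["NN"]) / ((lam["BN"] - lam["NN"]) + (lam["NP"] - lam["BP"]))
    return a, b, g

def decide(p_a, alpha, beta):
    """Three-way decision on P(A | [x]) using rules P1, N1, B1."""
    if p_a >= alpha:
        return "POS"
    if p_a <= beta:
        return "NEG"
    return "BND"

lam = {"PP": 0.0, "BP": 1.0, "NP": 4.0,
       "NN": 0.0, "BN": 1.0, "PN": 4.0}
alpha, beta, gamma = thresholds(lam)   # here: 0.75, 0.25, 0.5
print(decide(0.9, alpha, beta))  # POS
print(decide(0.5, alpha, beta))  # BND
```

Note that the classical Pawlak rough set model is recovered as the special case \textstyle \alpha = 1, \textstyle \beta = 0, which these symmetric example losses do not produce.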


See also

* Rough sets
* Granular computing
* Fuzzy set theory



External links


* The International Rough Set Society
* The Decision-theoretic Rough Set Portal