Extension Theory
Extension theory was first proposed by Cai in 1983 to solve contradictory problems. While classical mathematics deals with the quantities and forms of objects, extension theory describes such objects with matter-element models.

Extension Neural Network
The extension neural network has an appearance similar to that of a conventional neural network. Weight values reside between the input nodes and the output nodes, and the output nodes are the representation of the input nodes obtained by passing them through those weights. The total numbers of input and output nodes are denoted by n and n_c, respectively; these numbers depend on the number of characteristics and the number of classes. Rather than using a single weight value between two layer nodes, as in a conventional neural network, the extension neural network architecture keeps two weight values between input node j and output node k: a lower bound w^L_{kj} and an upper bound w^U_{kj}. For instance i, x^p_{ij} is the j-th input of an instance that belongs to class p, and o_{ik} is the corresponding output for class k. The output is calculated using the extension distance, as shown in equation 6. The estimated class k* is found by searching for the minimum among the extension distances calculated for all classes, as summarized in equation 7.
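As a concrete sketch, the classification step (equations 6 and 7) might look as follows in code. This assumes the commonly used form of the extension distance, i.e. the distance to the class center normalized by half the class range; the function and variable names are illustrative, not the paper's notation.

```python
import numpy as np

def extension_distance(x, w_low, w_high):
    """Extension distance between one instance and one class (equation 6).

    x      : (n,) feature vector of the instance
    w_low  : (n,) lower-bound weights of the class
    w_high : (n,) upper-bound weights of the class
    """
    z = (w_low + w_high) / 2.0     # class center
    half = (w_high - w_low) / 2.0  # half-width of the class range
    # Per-feature distance to the center, normalized by the half-width;
    # the "+ 1" makes each term 0 at the center and 1 on the range boundary.
    return float(np.sum((np.abs(x - z) - half) / np.abs(half) + 1.0))

def classify(x, w_low, w_high):
    """Estimated class (equation 7): the class with minimum extension distance."""
    dists = [extension_distance(x, w_low[k], w_high[k]) for k in range(len(w_low))]
    return int(np.argmin(dists))
```

An instance lying inside one class's range and far from another's will have a much smaller extension distance to the first class, so `classify` picks it.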
Learning Algorithm

Each class is composed of ranges of characteristics. These characteristics are the input types, or names, which come from the matter-element model, and the weight values in the extension neural network represent these ranges. In the learning algorithm, the weights are first initialized by searching for the maximum and minimum input values of each class, as shown in equation 8, where i is the instance index and j indexes the inputs. This initialization provides the classes' ranges according to the given training data. Once the weights are obtained, the centers of the clusters are found through equation 9. Before the learning process begins, a predefined learning performance rate is given, as shown in equation 10, where N_m is the number of misclassified instances and N_p is the total number of instances. The initialized parameters are used to classify instances with equation 6. If the initialization does not reach the predefined learning performance rate, training is required. In the training step the weights are adjusted to classify the training data more accurately, so the aim is to reduce the learning performance rate. Every training instance is used in each iteration, and the learning performance rate is checked after each iteration to determine whether the required performance has been reached.
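The initialization and performance-rate steps just described (equations 8 to 10) could be sketched as follows; the array shapes and function names are assumptions made for illustration.

```python
import numpy as np

def init_weights(X, y, n_classes):
    """Per-class minimum/maximum of each input (equation 8).

    X : (N, n) training instances; y : (N,) integer class labels.
    Returns (n_classes, n) lower- and upper-bound weight matrices.
    """
    w_low = np.array([X[y == k].min(axis=0) for k in range(n_classes)])
    w_high = np.array([X[y == k].max(axis=0) for k in range(n_classes)])
    return w_low, w_high

def cluster_centers(w_low, w_high):
    """Center of each class range (equation 9): the midpoint of the two bounds."""
    return (w_low + w_high) / 2.0

def learning_performance_rate(y_true, y_pred):
    """Equation 10: number of misclassified instances over total instances."""
    return float(np.mean(np.asarray(y_true) != np.asarray(y_pred)))
```

Training would then repeatedly classify every instance, adjust the bounds (and hence the centers) of the classes involved in each misclassification, and stop once the rate falls below the predefined threshold; the exact update rule is not specified in the text above.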
References

# Kuei-Hsiang Chao, Meng-Hui Wang, and Chia-Chang Hsu. A novel residual capacity estimation method based on extension neural network for lead-acid batteries. International Symposium on Neural Networks, pages 1145–1154, 2007.
# Kuei-Hsiang Chao, Meng-Hui Wang, Wen-Tsai Sung, and Guan-Jie Huang. Using ENN-1 for fault recognition of automotive engine. Expert Systems with Applications, 37(4):2943–2947, 2010.
# Juncai Zhang, Xu Qian, Yu Zhou, and Ai Deng. Condition monitoring method of the equipment based on extension neural network. Chinese Control and Decision Conference, pages 1735–1740, 2010.
# M. Wang and C. P. Hung. Extension neural network and its applications. Neural Networks, 16(5–6):779–784, 2003. doi:10.1016/S0893-6080(03)00104-7. PMID 12850034.