
A dynamic semisupervised feedforward neural network clustering

Published online by Cambridge University Press:  03 May 2016

Roya Asadi*
Affiliation:
Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur, Malaysia
Sameem Abdul Kareem
Affiliation:
Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur, Malaysia
Shokoofeh Asadi
Affiliation:
Department of Agricultural Management Engineering, Faculty of Ebne-Sina, University of Science and Research Branch, Tehran, Iran
Mitra Asadi
Affiliation:
Department of Research, Iranian Blood Transfusion Organization, Tehran, Iran
*Reprint requests to: Roya Asadi, Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur, 60503, Selangor, Malaysia. E-mail: royaasadi@siswa.um.edu.my

Abstract

An efficient single-layer dynamic semisupervised feedforward neural network clustering method with one-epoch training, data dimensionality reduction, and the ability to control noisy data is discussed as a way to overcome the high training time, low accuracy, and high memory complexity of clustering. After the arrival of each new online input datum, the codebook of nonrandom weights and other essential information about the online data are dynamically updated and stored in memory. The exclusive threshold of the datum is then calculated from this stored information, the datum is clustered, and the network of clusters is updated. After learning, the model assigns a class label to the unlabeled data by means of a linear activation function and the exclusive threshold. Finally, the number of clusters and the density of each cluster are updated. The accuracy of the proposed model is measured through the number of clusters, the number of correctly classified nodes, and the F-measure. In brief, the F-measure is 100% for the Iris, Musk2, Arcene, and Yeast data sets and 99.96% for the Spambase data set from the University of California, Irvine, Machine Learning Repository; for predicting survival time on the breast cancer data set from the University of Malaya Medical Center, the model achieves superior F-measure results of between 98.14% and 100%. We show that the proposed method is applicable in other areas, such as predicting the hydrate formation temperature with high accuracy.
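The abstract describes a single-pass, online procedure: as each datum arrives, a codebook of nonrandom weights is updated, an exclusive threshold is computed from the stored statistics, the datum is clustered against that threshold, and labeled data propagate class labels to their clusters. The paper's exact update and threshold formulas are not given on this page, so the following is only a minimal sketch of that control flow, assuming Euclidean distance, centroid-style codebook entries, and a running-mean distance threshold (all illustrative stand-ins, not the authors' formulas):

```python
import numpy as np

class OnlineSemiSupervisedClusterer:
    """Illustrative one-epoch clusterer: one codebook entry (centroid) per cluster."""

    def __init__(self):
        self.centroids = []   # codebook of weight vectors, one per cluster
        self.counts = []      # density (number of absorbed data) per cluster
        self.labels = []      # class label per cluster, if any (semisupervised)
        self.dist_sum = 0.0   # running statistics behind the exclusive threshold
        self.n_seen = 0

    def _threshold(self):
        # Assumed stand-in for the exclusive threshold: the running mean
        # nearest-centroid distance over all data seen so far.
        return self.dist_sum / self.n_seen if self.n_seen else np.inf

    def partial_fit(self, x, label=None):
        x = np.asarray(x, dtype=float)
        self.n_seen += 1
        if not self.centroids:                   # first datum opens a cluster
            self.centroids.append(x.copy())
            self.counts.append(1)
            self.labels.append(label)
            return 0
        d = [np.linalg.norm(x - c) for c in self.centroids]
        j = int(np.argmin(d))
        self.dist_sum += d[j]
        if d[j] <= self._threshold():            # absorb: update the codebook entry
            self.counts[j] += 1
            self.centroids[j] += (x - self.centroids[j]) / self.counts[j]
            if label is not None:
                self.labels[j] = label           # labeled datum tags its cluster
            return j
        self.centroids.append(x.copy())          # otherwise grow the network
        self.counts.append(1)
        self.labels.append(label)
        return len(self.centroids) - 1

    def predict(self, x):
        # Assign unlabeled data the label of the nearest codebook entry.
        d = [np.linalg.norm(np.asarray(x, dtype=float) - c) for c in self.centroids]
        return self.labels[int(np.argmin(d))]
```

On a mixed stream, `partial_fit(x, y)` and `partial_fit(x)` interleave labeled and unlabeled data, and clustering quality can then be scored with the F-measure (the harmonic mean of precision and recall), for example via `sklearn.metrics.f1_score` on predicted versus true labels.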

Type
Regular Articles
Copyright
Copyright © Cambridge University Press 2016 


References

Abe, S. (2001). Pattern Classification: Neuro-Fuzzy Methods and Their Comparison. London: Springer–Verlag.
Ahirwar, G. (2014). A novel K means clustering algorithm for large datasets based on divide and conquer technique. International Journal of Computer Science and Information Technologies 5(1), 301–305.
Alippi, C., Piuri, V., & Sami, M. (1995). Sensitivity to errors in artificial neural networks: a behavioral approach. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 42(6), 358–361.
Andonie, R., & Kovalerchuk, B. (2007). Neural Networks for Data Mining: Constraints and Open Problems. Ellensburg, WA: Central Washington University, Computer Science Department.
Asadi, R., Asadi, M., & Sameem, A.K. (2014). An efficient semisupervised feed forward neural network clustering. Artificial Intelligence for Engineering Design, Analysis and Manufacturing. Advance online publication. doi:10.1017/S0890060414000675
Asadi, R., & Kareem, S.A. (2014). Review of feed forward neural network classification preprocessing techniques. Proc. 3rd Int. Conf. Mathematical Sciences (ICMS3), pp. 567–573, Kuala Lumpur, Malaysia.
Asadi, R., Sabah Hasan, H., & Abdul Kareem, S. (2014a). Review of current online dynamic unsupervised feed forward neural network classification. International Journal of Artificial Intelligence and Neural Networks 4(2), 12.
Asadi, R., Sabah Hasan, H., & Abdul Kareem, S. (2014b). Review of current online dynamic unsupervised feed forward neural network classification. Proc. Computer Science and Electronics Engineering (CSEE), Kuala Lumpur, Malaysia.
Asuncion, A., & Newman, D. (2007). UCI Machine Learning Repository. Irvine, CA: University of California, School of Information and Computer Science. Accessed at http://www.ics.uci.edu/~mlearn/MLRepository
Bengio, Y., Buhmann, J.M., Embrechts, M., & Zurada, M. (2000). Introduction to the special issue on neural networks for data mining and knowledge discovery. IEEE Transactions on Neural Networks 11(3), 545–549.
Bose, N.K., & Liang, P. (1996). Neural Network Fundamentals With Graphs, Algorithms, and Applications. New York: McGraw–Hill.
Bouchachia, A.B., Gabrys, B., & Sahel, Z. (2007). Overview of some incremental learning algorithms. Proc. Fuzzy Systems Conf., FUZZ-IEEE.
Craven, M.W., & Shavlik, J.W. (1997). Using neural networks for data mining. Future Generation Computer Systems 13(2), 211–229.
Dasarathy, B.V. (1990). Nearest Neighbor Pattern Classification Techniques. Los Alamitos, CA: IEEE Computer Society Press.
Davy, H. (1811). The Bakerian Lecture: on some of the combinations of oxymuriatic gas and oxygene, and on the chemical relations of these principles, to inflammable bodies. Philosophical Transactions of the Royal Society of London 101, 1–35.
DeMers, D., & Cottrell, G. (1993). Non-linear dimensionality reduction. Advances in Neural Information Processing Systems 5, 580–587.
Demuth, H., Beale, M., & Hagan, M. (2008). Neural Network Toolbox™ 6: User's Guide. Natick, MA: MathWorks.
Deng, D., & Kasabov, N. (2003). On-line pattern analysis by evolving self-organizing maps. Neurocomputing 51, 87–103.
Du, K.L. (2010). Clustering: a neural network approach. Neural Networks 23(1), 89–107.
Eslamimanesh, A., Mohammadi, A.H., & Richon, D. (2012). Thermodynamic modeling of phase equilibria of semi-clathrate hydrates of CO2, CH4, or N2 + tetra-n-butylammonium bromide aqueous solution. Chemical Engineering Science 81, 319–328.
Fisher, R. (1950). The Use of Multiple Measurements in Taxonomic Problems: Contributions to Mathematical Statistics (Vol. 2). New York: Wiley. (Original work published 1936)
Fritzke, B. (1995). A growing neural gas network learns topologies. Advances in Neural Information Processing Systems 7, 625–632.
Fritzke, B. (1997). Some Competitive Learning Methods. Dresden: Dresden University of Technology, Artificial Intelligence Institute.
Furao, S., Ogura, T., & Hasegawa, O. (2007). An enhanced self-organizing incremental neural network for online unsupervised learning. Neural Networks 20(8), 893–903.
Germano, T. (1999). Self-organizing maps. Accessed at http://davis.wpi.edu/~matt/courses/soms
Ghavipour, M., Ghavipour, M., Chitsazan, M., Najibi, S.H., & Ghidary, S.S. (2013). Experimental study of natural gas hydrates and a novel use of neural network to predict hydrate formation conditions. Chemical Engineering Research and Design 91(2), 264–273.
Goebel, M., & Gruenwald, L. (1999). A survey of data mining and knowledge discovery software tools. ACM SIGKDD Explorations Newsletter 1(1), 20–33.
Gui, V., Vasiu, R., & Bojković, Z. (2001). A new operator for image enhancement. Facta Universitatis, Series: Electronics and Energetics 14(1), 109–117.
Guyon, I. (2003). Design of experiments of the NIPS 2003 variable selection benchmark. Proc. NIPS 2003 Workshop on Feature Extraction and Feature Selection, Whistler, BC, Canada, December 11–13.
Guyon, I., & Elisseeff, A. (2003). An introduction to variable and feature selection. Journal of Machine Learning Research 3, 1157–1182.
Hamker, F.H. (2001). Life-long learning cell structures—continuously learning without catastrophic interference. Neural Networks 14(4–5), 551–573.
Han, J., & Kamber, M. (2006). Data Mining, Southeast Asia Edition: Concepts and Techniques. San Francisco, CA: Morgan Kaufmann.
Haykin, S. (2004). Neural Networks: A Comprehensive Foundation, Vol. 2. Upper Saddle River, NJ: Prentice Hall.
Hazlina, H., Sameem, A., NurAishah, M., & Yip, C. (2004). Back propagation neural network for the prognosis of breast cancer: comparison on different training algorithms. Proc. 2nd Int. Conf. Artificial Intelligence in Engineering & Technology, pp. 445–449, Sabah, Malaysia, August 3–4.
Hebb, D.O. (1949). The Organization of Behavior: A Neuropsychological Approach, Vol. 1, pp. 143–150. New York: Wiley.
Hebboul, A., Hacini, M., & Hachouf, F. (2011). An incremental parallel neural network for unsupervised classification. Proc. 7th Int. Workshop on Systems, Signal Processing and Their Applications (WOSSPA), Tipaza, Algeria, May 9–11.
Hegland, M. (2003). Data Mining—Challenges, Models, Methods and Algorithms. Canberra, Australia: Australian National University, ANU Data Mining Group.
Hinton, G.E., & Salakhutdinov, R.R. (2006). Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507.
Honkela, T. (1998). Description of Kohonen's self-organizing map. Accessed at http://www.cis.hut.fi/~tho/thesis
Jacquier, E., Kane, A., & Marcus, A.J. (2003). Geometric or arithmetic mean: a reconsideration. Financial Analysts Journal 59(6), 46–53.
Jain, A.K. (2010). Data clustering: 50 years beyond K-means. Pattern Recognition Letters 31(8), 651–666.
Jean, J.S., & Wang, J. (1994). Weight smoothing to improve network generalization. IEEE Transactions on Neural Networks 5(5), 752–763.
Jolliffe, I.T. (1986). Principal Component Analysis. Springer Series in Statistics, pp. 1–7. New York: Springer.
Kamiya, Y., Ishii, T., Furao, S., & Hasegawa, O. (2007). An online semi-supervised clustering algorithm based on a self-organizing incremental neural network. Proc. Int. Joint Conf. Neural Networks (IJCNN). Piscataway, NJ: IEEE.
Kantardzic, M. (2011). Data Mining: Concepts, Models, Methods, and Algorithms. Hoboken, NJ: Wiley–Interscience.
Kasabov, N.K. (1998). ECOS: evolving connectionist systems and the ECO learning paradigm. Proc. 5th Int. Conf. Neural Information Processing, ICONIP'98, Kitakyushu, Japan.
Kemp, R.A., MacAulay, C., Garner, D., & Palcic, B. (1997). Detection of malignancy associated changes in cervical cell nuclei using feed-forward neural networks. Journal of the European Society for Analytical Cellular Pathology 14(1), 31–40.
Kobayashi, R., Song, K.Y., & Sloan, E.D. (1987). Phase behavior of water/hydrocarbon systems. In Petroleum Engineering Handbook (Bradley, H.B., Ed.), chap. 25. Richardson, TX: Society of Petroleum Engineers.
Kohonen, T. (1997). Self-Organizing Maps, Springer Series in Information Sciences, Vol. 30, pp. 22–25. Berlin: Springer–Verlag.
Kohonen, T. (2000). Self-Organizing Maps, 3rd ed. Berlin: Springer–Verlag.
Larochelle, H., Bengio, Y., Louradour, J., & Lamblin, P. (2009). Exploring strategies for training deep neural networks. Journal of Machine Learning Research 10, 1–40.
Laskowski, K., & Touretzky, D. (2006). Hebbian learning, principal component analysis, and independent component analysis. Artificial neural networks. Accessed at http://www.cs.cmu.edu/afs/cs/academic/class/15782-f06/slides/hebbpca.pdf
Linde, Y., Buzo, A., & Gray, R. (1980). An algorithm for vector quantizer design. IEEE Transactions on Communications 28(1), 84–95.
Longadge, M.R., Dongre, M.S.S., & Malik, L. (2013). Multi-cluster based approach for skewed data in data mining. Journal of Computer Engineering 12(6), 66–73.
Mangat, V., & Vig, R. (2014). Novel associative classifier based on dynamic adaptive PSO: application to determining candidates for thoracic surgery. Expert Systems With Applications 41(18), 8234–8244.
Martinetz, T.M. (1993). Competitive Hebbian learning rule forms perfectly topology preserving maps. Proc. ICANN'93, pp. 427–434. London: Springer.
Mathworks. (2008). Matlab Neural Network Toolbox. Accessed at http://www.mathworks.com
McCloskey, S. (2000). Neural networks and machine learning. Accessed at http://www.cim.mcgill.ca/~scott/RIT/research_project.html
Melek, W.W., & Sadeghian, A. (2009). A theoretic framework for intelligent expert systems in medical encounter evaluation. Expert Systems 26(1), 82–99.
Moradi, M.R., Nazari, K., Alavi, S., & Mohaddesi, M. (2013). Prediction of equilibrium conditions for hydrate formation in binary gaseous systems using artificial neural networks. Energy Technology 1(2–3), 171–176.
Oh, M., & Park, H.M. (2011). Preprocessing of independent vector analysis using feed-forward network for robust speech recognition. Proc. Neural Information Processing Conf., Granada, Spain, December 12–17.
Pavel, B. (2002). Survey of Clustering Data Mining Techniques. San Jose, CA: Accrue Software.
Peng, J.M., & Lin, Z. (1999). A non-interior continuation method for generalized linear complementarity problems. Mathematical Programming 86(3), 533–563.
Prudent, Y., & Ennaji, A. (2005). An incremental growing neural gas learns topologies. Proc. IEEE Int. Joint Conf. Neural Networks, IJCNN'05, San Jose, CA, July 31–August 5.
Rougier, N., & Boniface, Y. (2011). Dynamic self-organising map. Neurocomputing 74(11), 1840–1847.
Schaal, S., & Atkeson, C.G. (1998). Constructive incremental learning from only local information. Neural Computation 10(8), 2047–2084.
Shahnazar, S., & Hasan, N. (2014). Gas hydrate formation condition: review on experimental and modeling approaches. Fluid Phase Equilibria 379, 72–85.
Shen, F., Yu, H., Sakurai, K., & Hasegawa, O. (2011). An incremental online semi-supervised active learning algorithm based on self-organizing incremental neural network. Neural Computing and Applications 20(7), 1061–1074.
Tong, X., Qi, L., Wu, F., & Zhou, H. (2010). A smoothing method for solving portfolio optimization with CVaR and applications in allocation of generation asset. Applied Mathematics and Computation 216(6), 1723–1740.
Ultsch, A., & Siemon, H.P. (1990). Kohonen's self organizing feature maps for exploratory data analysis. Proc. Int. Neural Networks Conf., pp. 305–308.
Van der Maaten, L.J., Postma, E.O., & Van den Herik, H.J. (2009). Dimensionality reduction: a comparative review. Journal of Machine Learning Research 10(1–41), 66–71.
Vandesompele, J., De Preter, K., Pattyn, F., Poppe, B., Van Roy, N., De Paepe, A., & Speleman, F. (2002). Accurate normalization of real-time quantitative RT-PCR data by geometric averaging of multiple internal control genes. Genome Biology 3(7), research0034.
Werbos, P. (1974). Beyond regression: new tools for prediction and analysis in the behavioral sciences. PhD Thesis, Harvard University.
Zahedi, G., Karami, Z., & Yaghoobi, H. (2009). Prediction of hydrate formation temperature by both statistical models and artificial neural network approaches. Energy Conversion and Management 50(8), 2052–2059.
Ziegel, E.R. (2002). Statistical inference. Technometrics 44(4), 407–408.