ERNEAD: Training of Artificial Neural Networks Based on a Genetic Algorithm and Finite Automata Theory

dc.contributor.author: Ruiz-Rangel, Jonathan
dc.contributor.author: Ardila Hernández, Carlos Julio
dc.contributor.author: Maradei González, Luis
dc.contributor.author: Jabba Molinares, Daladier
dc.date.accessioned: 2018-03-13T16:08:10Z
dc.date.available: 2018-03-13T16:08:10Z
dc.date.issued: 2018
dc.description.abstract: This paper presents a variation of the EMODS algorithm (Evolutionary Metaheuristic of Deterministic Swapping) at the level of its mutation stage, in order to train neural networks for each problem. The EMODS metaheuristic is a novel framework for the multi-objective optimization of combinatorial problems. The proposed neural-network training method is named ERNEAD (training of evolutionary neural networks through evolutionary strategies and finite automata). The process consists of five phases: the initial population generation phase, the forward-feeding phase of the network, the EMODS search phase, the crossing and evaluation phase, and finally the verification phase. Applying the process to neural networks generates sets of networks with optimal weights for a particular problem. The ERNEAD algorithm was applied to two typical problems, breast cancer diagnosis and flower classification, and its solutions were compared with those obtained by the classical Backpropagation, Conjugate Gradient, and Levenberg-Marquardt algorithms. The analysis of the results indicated that ERNEAD produced more precise solutions than the classical algorithms. [eng]
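The five-phase loop described in the abstract can be sketched as a plain genetic-algorithm skeleton over a network's weight vector. This is only an illustrative reading of the abstract, not the authors' implementation: the swap mutation standing in for the EMODS search, the one-point crossover, the elite fraction, and every helper name here are assumptions.

```python
import random

def ernead_train(fitness, dim, pop_size=20, generations=30, seed=0):
    """Sketch of the five-phase loop from the abstract (dim >= 2).

    `fitness` scores a candidate weight vector (lower is better).
    All operators below are illustrative stand-ins, not the paper's.
    """
    rng = random.Random(seed)

    # Phase 1: generate an initial population of candidate weight vectors.
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)]
           for _ in range(pop_size)]

    for _ in range(generations):
        # Phase 2: forward-feed the network, i.e. evaluate each candidate.
        scored = sorted(pop, key=fitness)
        elite = scored[: max(2, pop_size // 4)]

        # Phase 3: EMODS-style search, approximated here as a
        # deterministic-swapping mutation of the elite candidates.
        mutants = []
        for w in elite:
            m = w[:]
            i, j = rng.randrange(dim), rng.randrange(dim)
            m[i], m[j] = m[j], m[i]  # swap two weights
            mutants.append(m)

        # Phase 4: crossing (one-point crossover) and evaluation to
        # refill the population.
        pool = elite + mutants
        children = []
        while len(pool) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, dim)
            children.append(a[:cut] + b[cut:])
        pop = pool + children

    # Phase 5: verification, simplified to returning the candidate
    # with the best observed fitness.
    return min(pop, key=fitness)
```

Because the elite are carried over unchanged each generation, the best fitness in the population never degrades; in the paper this role is played by the verification phase, which here is reduced to selecting the best final candidate.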
dc.identifier.issn: 09740635
dc.identifier.uri: http://hdl.handle.net/20.500.12442/1863
dc.language.iso: eng [spa]
dc.publisher: Editorial Board [spa]
dc.rights.accessrights: info:eu-repo/semantics/restrictedAccess
dc.rights.license: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License [spa]
dc.source: International Journal of Artificial Intelligence [eng]
dc.source: Vol. 16, No. 1 (2018) [eng]
dc.source.uri: http://www.ceser.in/ceserp/index.php/ijai/article/view/5456 [spa]
dc.subject: Finite Deterministic Automaton [eng]
dc.subject: Artificial Neural Networks [eng]
dc.subject: Genetic Algorithm [eng]
dc.subject: EMODS [eng]
dc.subject: Backpropagation Algorithm [eng]
dc.subject: Conjugate Gradient Algorithm [eng]
dc.subject: Levenberg-Marquardt Algorithm [eng]
dc.title: ERNEAD: Training of Artificial Neural Networks Based on a Genetic Algorithm and Finite Automata Theory [eng]
dc.type: article [spa]
dcterms.references: Brownlee, J. 2011. Clever Algorithms: Nature-Inspired Programming Recipes, Vol. 5 of Machine Learning, Prentice Hall, Swinburne University, Melbourne, Australia. [eng]
dcterms.references: Cardie, C. 1993. Using decision trees to improve case-based learning, Proceedings of the Tenth International Conference on Machine Learning, Morgan Kaufmann: 25–32. [eng]
dcterms.references: Center for Machine Learning and Intelligent Systems, accessed 23 August 2013. http://cml.ics.uci.edu/. [eng]
dcterms.references: Cirstea, M. N., Dinu, A., Khor, J. and Malcolm, M. 2002. Neural and Fuzzy Logic Control of Drives and Power Systems, Vol. I, Newnes (Elsevier Science), Linacre House, Oxford OX2 8DP; 225 Wildwood Avenue, Woburn, USA. [eng]
dcterms.references: Duda, R. O., Hart, P. E. and Nilsson, N. J. 1976. Subjective Bayesian methods for rule-based inference systems, National Computer Conference and Exposition, SRI International, 76: 1075–1082. [eng]
dcterms.references: Engelbrecht, A. P. 2002. Computational Intelligence: An Introduction, Vol. 1 of Computer Science, John Wiley and Sons Ltd, University of Pretoria, Pretoria, South Africa. [eng]
dcterms.references: Floreano, D. and Mattiussi, C. 2008. Bio-Inspired Artificial Intelligence: Theories, Methods, and Technologies, Intelligent Robotics and Autonomous Agents series, The MIT Press, Cambridge, MA, USA. [eng]
dcterms.references: Gutiérrez Peña, P. A. 2009. Nuevos modelos de redes neuronales evolutivas y regresión logística generalizada utilizando funciones de base. Aplicaciones. [spa]
dcterms.references: Guzmán, L., Gómez, A., Ardila Hernández, C. J. and Jabba Molinares, D. 2013. Adaptation of the GRASP algorithm to solve a multiobjective problem using the Pareto concept, International Journal of Artificial Intelligence 11(A13): 222–236. [eng]
dcterms.references: Haykin, S. 1999. Neural Networks: A Comprehensive Foundation, 2nd edn, Prentice Hall, McMaster University, Hamilton, Ontario, Canada. [eng]
dcterms.references: Hestenes, M. R. and Stiefel, E. 1952. Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards 49(6): 409–436. [eng]
dcterms.references: Kasabov, N. K. 1998. Foundations of Neural Networks, Fuzzy Systems, and Knowledge Engineering, Computational Intelligence series, MIT Press, London, England. [eng]
dcterms.references: Lenat, D. B. 1976. AM: An artificial intelligence approach to discovery in mathematics as heuristic search. [eng]
dcterms.references: Marquardt, D. 1963. An algorithm for least-squares estimation of nonlinear parameters, SIAM Journal on Applied Mathematics 11(2): 431–441. [eng]
dcterms.references: McNelis, P. D. 2005. Neural Networks in Finance: Gaining Predictive Edge in the Market, Vol. 30 of Academic Press Advanced Finance, Elsevier Academic Press, Burlington, MA, USA. [eng]
dcterms.references: Medina Alfonzo, E. L. 2011. Hibridización de lógica difusa y algoritmos genéticos en la predicción de registros de velocidad de ondas. Campo Guafita. [spa]
dcterms.references: Niño Ruiz, E. D. 2011. Evolutionary algorithms based on the automata theory for the multi-objective optimization of combinatorial problems, Real-World Applications of Genetic Algorithms I(1): 81–108. [eng]
dcterms.references: Niño Ruiz, E. D. 2012. SAMODS and SAGAMODS: Novel algorithms based on the automata theory for the multiobjective optimization of combinatorial problems, International Journal of Artificial Intelligence 8(S12): 147–165. [eng]
dcterms.references: Niño Ruiz, E. D., Ardila Hernández, C. J., Jabba Molinares, D., Barrios Sarmiento, A. and Donoso Meisel, Y. 2010. MODS: A novel metaheuristic of deterministic swapping for the multi-objective optimization of combinatorial problems, Computer Technology and Application 2(4): 280–292. [eng]
dcterms.references: Nieto Parra, H. 2011. Diseño e implementación de una metaheurística híbrida basada en recocido simulado, algoritmos genéticos y teoría de autómatas para la optimización bi-objetivo de problemas combinatorios. [spa]
dcterms.references: Ruiz, R. and Maroto, C. 2005. A genetic algorithm for hybrid flowshops with sequence dependent setup times and machine eligibility, European Journal of Operational Research 169(3): 781–800. [eng]
dcterms.references: Ruiz-Rangel, J. R. 2011. Entrenamiento de redes neuronales artificiales basado en algoritmo evolutivo y teoría de autómatas finitos. [spa]
dcterms.references: Rumelhart, D. E., Hinton, G. E. and Williams, R. J. 1986. Learning internal representations by error propagation, Parallel Distributed Processing: Explorations in the Microstructure of Cognition 1(1): 318–362. [eng]
dcterms.references: Samarasinghe, S. 2007. Neural Networks for Applied Sciences and Engineering: From Fundamentals to Complex Pattern Recognition, Auerbach Publications - Taylor and Francis Group, New York, USA. [eng]
dcterms.references: Sierra Araujo, B. 2006. Aprendizaje automático: conceptos básicos y avanzados. Aspectos prácticos utilizando el software WEKA, Pearson Educación, Departamento de Ciencias de la Computación e Inteligencia Artificial, Universidad del País Vasco, España. [spa]
dcterms.references: Taylor, B. J. 2006. Methods and Procedures for the Verification and Validation of Artificial Neural Networks, Springer, Institute for Scientific Research, Inc., Fairmont, WV, USA. [eng]
dcterms.references: Twomey, J. M. and Smith, A. E. 1995. Performance measures, consistency, and power for artificial neural network models, Math. Comput. Modelling 21(1/2): 243–258. [eng]
dcterms.references: Wagman, M. 2000. Scientific Discovery Processes in Humans and Computers: Theory and Research in Psychology and Artificial Intelligence, Praeger / Greenwood Publishing Group, Westport, CT, USA. [eng]
dcterms.references: Yao, X. 1999. Evolving artificial neural networks, Proceedings of the IEEE 87(9): 1423–1447. [eng]
