Show simple item record

dc.rights.license: Licencia de Creative Commons Reconocimiento-NoComercial-CompartirIgual 4.0 Internacional [es]
dc.contributor.author: Ruiz-Rangel, Jonathan
dc.contributor.author: Ardila Hernandez, Carlos Julio
dc.contributor.author: Maradei Gonzalez, Luis
dc.contributor.author: Jabba Molinares, Daladier
dc.date.accessioned: 2018-03-13T16:08:10Z
dc.date.available: 2018-03-13T16:08:10Z
dc.date.issued: 2018
dc.identifier.issn: 0974-0635
dc.identifier.uri: http://hdl.handle.net/20.500.12442/1863
dc.description.abstract: This paper presents a variation of the EMODS (Evolutionary Metaheuristic of Deterministic Swapping) algorithm at the level of its mutation stage, in order to train networks for each problem. It should be noted that the EMODS metaheuristic is a novel framework that allows multi-objective optimization of combinatorial problems. The proposal for the training of neural networks is named ERNEAD (training of Evolutionary Neural Networks through Evolutionary Strategies and Finite Automata). The selection process consists of five phases: the initial population generation phase, the forward feeding phase of the network, the EMODS search phase, the crossover and evaluation phase, and finally the verification phase. Applying the process to neural networks generates sets of networks with optimal weights for a particular problem. The ERNEAD algorithm was applied to two typical problems, breast cancer classification and flower classification, and the solutions were compared with those obtained by applying the classical Backpropagation, Conjugate Gradient and Levenberg-Marquardt algorithms. The analysis of the results indicated that ERNEAD produced more precise solutions than those produced by the classic algorithms. [en]
dc.language.iso: en [es]
dc.publisher: Editorial Board [es]
dc.rights: info:eu-repo/semantics/restrictedAccess
dc.source: International Journal of Artificial Intelligence [en]
dc.source: Vol. 16, No. 1 (2018) [en]
dc.source.uri: http://www.ceser.in/ceserp/index.php/ijai/article/view/5456 [es]
dc.subject: Finite Deterministic Automaton [en]
dc.subject: Artificial Neural Networks [en]
dc.subject: Genetic Algorithm [en]
dc.subject: EMODS [en]
dc.subject: Backpropagation Algorithm [en]
dc.subject: Conjugate Gradient Algorithm [en]
dc.subject: Levenberg-Marquardt Algorithm [en]
dc.title: ERNEAD: Training of Artificial Neural Networks Based on a Genetic Algorithm and Finite Automata Theory [en]
dc.type: Article [es]
dcterms.bibliographicCitation: Brownlee, J. 2011. Clever Algorithms: Nature-Inspired Programming Recipes, Vol. 5 of Machine Learning, Prentice Hall, Swinburne University, Melbourne, Australia. [en]
dcterms.bibliographicCitation: Cardie, C. 1993. Using decision trees to improve case-based learning, Proceedings of the Tenth International Conference on Machine Learning, Morgan Kaufmann, pp. 25–32. [en]
dcterms.bibliographicCitation: Center for Machine Learning and Intelligent Systems, 23 August 2013. http://cml.ics.uci.edu/. [en]
dcterms.bibliographicCitation: Cirstea, M. N., Dinu, A., Khor, J. and Malcolm, M. 2002. Neural and Fuzzy Logic Control of Drives and Power Systems, Vol. I, Newnes (Elsevier Science), Linacre House, Oxford OX2 8DP; 225 Wildwood Avenue, Woburn, USA. [en]
dcterms.bibliographicCitation: Duda, R. O., Hart, P. E. and Nilsson, N. J. 1976. Subjective Bayesian methods for rule-based inference systems, Proceedings of the National Computer Conference and Exposition, SRI International, 76: 1075–1082. [en]
dcterms.bibliographicCitation: Engelbrecht, A. P. 2002. Computational Intelligence: An Introduction, Vol. 1 of Computer Science, John Wiley and Sons Ltd, University of Pretoria, Pretoria, South Africa. [en]
dcterms.bibliographicCitation: Floreano, D. and Mattiussi, C. 2008. Bio-Inspired Artificial Intelligence: Theories, Methods, and Technologies, Vol. I of Intelligent Robotics and Autonomous Agents, The MIT Press, One Rogers Street, Cambridge, MA 02142-1209, USA. [en]
dcterms.bibliographicCitation: Gutiérrez Peña, P. A. 2009. Nuevos modelos de redes neuronales evolutivas y regresión logística generalizada utilizando funciones de base. Aplicaciones. [en]
dcterms.bibliographicCitation: Guzmán, L., Gómez, A., Ardila Hernández, C. J. and Jabba Molinares, D. 2013. Adaptation of the GRASP algorithm to solve a multiobjective problem using the Pareto concept, International Journal of Artificial Intelligence 11(A13): 222–236. [en]
dcterms.bibliographicCitation: Haykin, S. 1999. Neural Networks: A Comprehensive Foundation, Vol. 2 of Computer Science, Prentice Hall, McMaster University, Hamilton, Ontario, Canada. [en]
dcterms.bibliographicCitation: Hestenes, M. R. and Stiefel, E. 1952. Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards 49(6): 409–436. [en]
dcterms.bibliographicCitation: Kasabov, N. K. 1998. Foundations of Neural Networks, Fuzzy Systems, and Knowledge Engineering, Vol. I of Computational Intelligence, MIT Press, Massachusetts Institute of Technology, London, England. [en]
dcterms.bibliographicCitation: Lenat, D. B. 1976. AM: An artificial intelligence approach to discovery in mathematics as heuristic search. [en]
dcterms.bibliographicCitation: Marquardt, D. 1963. An algorithm for least-squares estimation of nonlinear parameters, SIAM Journal on Applied Mathematics 11(2): 431–441. [en]
dcterms.bibliographicCitation: McNelis, P. D. 2005. Neural Networks in Finance: Gaining Predictive Edge in the Market, Vol. 30 of Academic Press Advanced Finance, Elsevier Academic Press, 30 Corporate Drive, Suite 400, Burlington, MA 01803, USA. [en]
dcterms.bibliographicCitation: Medina Alfonzo, E. L. 2011. Hibridización de lógica difusa y algoritmos genéticos en la predicción de registros de velocidad de ondas. Campo Guafita. [es]
dcterms.bibliographicCitation: Niño Ruiz, E. D. 2011. Evolutionary algorithms based on the automata theory for the multi-objective optimization of combinatorial problems, Real-World Applications of Genetic Algorithms I(1): 81–108. [en]
dcterms.bibliographicCitation: Niño Ruiz, E. D. 2012. SAMODS and SAGAMODS: Novel algorithms based on the automata theory for the multiobjective optimization of combinatorial problems, International Journal of Artificial Intelligence 8(S12): 147–165. [en]
dcterms.bibliographicCitation: Niño Ruiz, E. D., Ardila Hernández, C. J., Jabba Molinares, D., Barrios Sarmiento, A. and Donoso Meisel, Y. 2010. MODS: A novel metaheuristic of deterministic swapping for the multi-objective optimization of combinatorial problems, Computer Technology and Application 2(4): 280–292. [en]
dcterms.bibliographicCitation: Nieto Parra, H. 2011. Diseño e implementación de una metaheurística híbrida basada en recocido simulado, algoritmos genéticos y teoría de autómatas para la optimización bi-objetivo de problemas combinatorios. [es]
dcterms.bibliographicCitation: Ruiz, R. and Maroto, C. 2005. A genetic algorithm for hybrid flowshops with sequence dependent setup times and machine eligibility, European Journal of Operational Research 169(3): 781–800. [en]
dcterms.bibliographicCitation: Ruiz-Rangel, J. R. 2011. Entrenamiento de redes neuronales artificiales basado en algoritmo evolutivo y teoría de autómatas finitos. [es]
dcterms.bibliographicCitation: Rumelhart, D. E., Hinton, G. E. and Williams, R. J. 1986. Learning internal representations by error propagation, Parallel Distributed Processing: Explorations in the Microstructure of Cognition 1(1): 318–362. [en]
dcterms.bibliographicCitation: Samarasinghe, S. 2007. Neural Networks for Applied Sciences and Engineering: From Fundamental to Complex Pattern Recognition, Vol. 1 of Computer Science, Auerbach Publications, Taylor and Francis Group, New York, USA. [en]
dcterms.bibliographicCitation: Sierra Araujo, B. 2006. Aprendizaje Automático: conceptos básicos y avanzados. Aspectos prácticos utilizando el software WEKA, Vol. 1 of Computer Science, Pearson Education, Departamento de Ciencias de la Computación e Inteligencia Artificial, Universidad del País Vasco, España. [es]
dcterms.bibliographicCitation: Taylor, B. J. 2006. Methods and Procedures for the Verification and Validation of Artificial Neural Networks, Vol. 2005933711 of Computer Science, Springer, Institute for Scientific Research, Inc., Fairmont, WV, USA. [en]
dcterms.bibliographicCitation: Twomey, J. M. and Smith, A. E. 1995. Performance measures, consistency, and power for artificial neural network models, Mathematical and Computer Modelling 21(1/2): 243–258. [en]
dcterms.bibliographicCitation: Wagman, M. 2000. Scientific Discovery Processes in Humans and Computers: Theory and Research in Psychology and Artificial Intelligence, Praeger (Greenwood Publishing Group), Westport, CT, USA. [en]
dcterms.bibliographicCitation: Yao, X. 1999. Evolving artificial neural networks, Proceedings of the IEEE 87(9): 1423–1447. [en]
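The five training phases named in the abstract (initial population generation, forward feeding of the network, EMODS search, crossover and evaluation, verification) can be sketched as a generic evolutionary loop. The sketch below is an illustrative assumption, not the authors' implementation: the single-neuron network, the weight-swap mutation standing in for the EMODS search phase, and every function name here are hypothetical.

```python
import random

def forward(weights, x):
    # Hypothetical "forward feeding" phase: a single neuron with a step
    # activation (an assumption; the paper's networks are more complex).
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

def fitness(weights, data):
    # Fraction of samples the network classifies correctly.
    return sum(forward(weights, x) == y for x, y in data) / len(data)

def mutate_swap(weights, rng):
    # Stand-in for the EMODS search phase: EMODS mutates by deterministic
    # swapping, modeled here as exchanging two weight positions.
    w = list(weights)
    i, j = rng.sample(range(len(w)), 2)
    w[i], w[j] = w[j], w[i]
    return w

def crossover(a, b, rng):
    # Single-point crossover of two weight vectors (an assumption).
    point = rng.randrange(1, len(a))
    return a[:point] + b[point:]

def ernead_like_train(data, n_weights, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    # Phase 1: initial population generation.
    pop = [[rng.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Phase 2: forward feed each network and rank by fitness.
        ranked = sorted(pop, key=lambda w: fitness(w, data), reverse=True)
        # Phase 3: swap-mutation search applied to the better half.
        mutants = [mutate_swap(w, rng) for w in ranked[:pop_size // 2]]
        # Phase 4: crossover and evaluation to refill the population.
        children = [crossover(rng.choice(ranked), rng.choice(ranked), rng)
                    for _ in range(pop_size - len(mutants))]
        pop = mutants + children
    # Phase 5: verification -- return the best network found.
    return max(pop, key=lambda w: fitness(w, data))
```

For example, training on a tiny linearly separable dataset (an OR function with a bias input) and checking `fitness` of the returned weight vector illustrates the loop end to end; the real paper applies the method to breast cancer and flower classification instead.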


Files in this item

No files are associated with this item.

This item appears in the following collection(s)

  • Artículos
    Peer-reviewed scientific articles
