Browsing by Author "Mora, M"
Showing 1 - 5 of 5
Item: Classification of Parkinson's disease patients based on spectrogram using local binary pattern descriptors (IOP Publishing, 2022). Gelvez-Almeida, E; Vásquez-Coronel, A; Guatelli, R; Aubin, V; Mora, M.

The extreme learning machine is an algorithm that has shown good performance on classification and regression problems. It has gained wide acceptance in the scientific community due to the simplicity of the model and its great generalization capacity. This work proposes the use of extreme learning machine neural networks to classify Parkinson's disease patients versus healthy individuals. The descriptor used is the feature vector generated by applying the local binary pattern algorithm to grayscale spectrograms. The spectrograms are obtained from the audio signal samples of the considered repository. Experiments are conducted with single-hidden-layer and multilayer extreme learning machine networks, comparing the results of each structure. Results show that a hierarchical extreme learning machine with three hidden layers has better overall performance than multilayer extreme learning machine networks and a single-hidden-layer extreme learning machine. The success rate obtained is within the ranges reported in the literature. However, the hierarchical network's training time is considerably faster than that of multilayer networks with two or three hidden layers.

Item: Conditioning of extreme learning machine for noisy data using heuristic optimization (IOP Publishing, 2020). Salazar, E; Mora, M; Vásquez, A; Gelvez, E.

This article provides a tool that can be used in the exact sciences to obtain good approximations to reality when noisy data are inevitable. Two heuristic optimization algorithms, simulated annealing and particle swarm optimization, are implemented to determine the extreme learning machine output weights.
Simulated annealing operates in a large search space and, at each iteration, probabilistically decides between staying in its current state or moving to another. Particle swarm optimization works on a population of candidate solutions, moving them through the search space according to their positions and velocities. The methodology consists of building data sets around a polynomial function, implementing the heuristic algorithms, and comparing the errors with the traditional computation method based on the Moore–Penrose inverse. The results show that the implemented heuristic optimization algorithms improve the estimation of the output weights when the input data are highly noisy.

Item: Estimation of the optimal number of neurons in extreme learning machine using simulated annealing and the golden section (IOP Publishing, 2023). Gelvez-Almeida, E; Mora, M; Huérfano-Maldonado, Y; Salazar-Jurado, E; Martínez-Jeraldo, N; Lozada-Yavina, R; Baldera-Moreno, Y; Tobar, L.

The extreme learning machine is a neural network algorithm widely accepted in the scientific community due to the simplicity of the model and its good results in classification and regression problems; digital image processing, medical diagnosis, and signal recognition are some applications in the field of physics addressed with these neural networks. The algorithm must be executed with an adequate number of neurons in the hidden layer to obtain good results. Identifying the appropriate number of neurons in the hidden layer is an open problem in the extreme learning machine field. The search has a high computational cost if carried out sequentially, given the growing complexity of the calculations as the number of neurons increases.
In this work, we use golden-section search and simulated annealing as heuristic methods to calculate the appropriate number of neurons in the hidden layer of an extreme learning machine. For the experiments, three real databases were used for the classification problem and a synthetic database for the regression problem. The results show that the search for the appropriate number of neurons is accelerated by up to 4.5× with simulated annealing and up to 95.7× with golden-section search, compared to a sequential method on the highest-dimensional database.

Item: Extreme learning machine adapted to noise based on optimization algorithms (IOP Publishing, 2020). Vásquez, A; Mora, M; Salazar, E; Gelvez, E.

The extreme learning machine for single-hidden-layer feedforward neural networks randomly assigns the input weights and analytically determines the output weights by means of the Moore–Penrose inverse. This algorithm tends to provide an extremely fast learning speed while preserving the fit achieved by classifiers such as the multilayer perceptron and the support vector machine. However, the Moore–Penrose inverse loses precision when the training data contain additive noise. For that reason, this paper proposes a method to make the extreme learning machine robust to additive noise. The method consists of computing the weights of the output layer using unconstrained non-linear optimization algorithms. Tests are performed with the gradient descent optimization algorithm and with the Levenberg–Marquardt algorithm.
The implementation shows that, by using these algorithms, smaller errors are achieved than those obtained with the Moore–Penrose inverse.

Item: Parallel methods for linear systems solution in extreme learning machines: an overview (2020). Gelvez-Almeida, E; Baldera-Moreno, Y; Huérfano, Y; Vera, M; Mora, M; Barrientos, R.

This paper presents an updated review of parallel algorithms for solving square and rectangular single- and double-precision matrix linear systems using multi-core central processing units and graphics processing units. A brief description is given of methods for solving linear systems based on direct operations, factorization, and iteration. The methodology of this article is documentary, based on a review of about 17 papers reported in the literature over the last five years (2016-2020). The findings demonstrate the potential of parallelism to significantly decrease extreme learning machine training times for problems with large amounts of data, given the cost of computing the Moore–Penrose pseudoinverse. Implementing parallel algorithms for the pseudoinverse computation can contribute significantly to applications in diverse areas, since it can accelerate the training of extreme learning machines while maintaining good results.
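All five abstracts above revolve around the same core training scheme: hidden-layer weights assigned at random and output weights computed analytically with the Moore–Penrose pseudoinverse. A minimal sketch of that baseline in NumPy follows; the function names, the tanh activation, and the toy regression data are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Train a single-hidden-layer extreme learning machine.

    Input weights and biases are assigned randomly; output weights are
    computed analytically with the Moore-Penrose pseudoinverse.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Linear readout over the random tanh hidden layer."""
    return np.tanh(X @ W + b) @ beta

# Toy regression: fit y = x^2 from lightly noisy samples.
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
T = X**2 + 0.01 * np.random.default_rng(1).standard_normal(X.shape)
W, b, beta = elm_train(X, T, n_hidden=20)
pred = elm_predict(X, W, b, beta)
```

The heuristic variants in the abstracts replace exactly one piece of this sketch: the papers on noisy data swap the `np.linalg.pinv` step for simulated annealing, particle swarm, gradient descent, or Levenberg–Marquardt estimation of `beta`, while the golden-section/simulated-annealing paper searches over `n_hidden` instead of fixing it.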