Parallel methods for linear systems solution in extreme learning machines: an overview
datacite.rights | http://purl.org/coar/access_right/c_abf2 | eng |
dc.contributor.author | Gelvez-Almeida, E | |
dc.contributor.author | Baldera-Moreno, Y | |
dc.contributor.author | Huérfano, Y | |
dc.contributor.author | Vera, M | |
dc.contributor.author | Mora, M | |
dc.contributor.author | Barrientos, R | |
dc.date.accessioned | 2021-10-26T22:48:27Z | |
dc.date.available | 2021-10-26T22:48:27Z | |
dc.date.issued | 2020 | |
dc.description.abstract | This paper presents an updated review of parallel algorithms for solving single- and double-precision linear systems with square and rectangular matrices on multi-core central processing units and graphics processing units. Methods for solving linear systems based on matrix operations, factorization, and iteration are briefly described. The methodology is documentary, based on the review of about 17 papers reported in the literature during the last five years (2016-2020). The findings show the potential of parallelism to significantly reduce extreme learning machine training times for problems with large amounts of data, since training requires computing the Moore-Penrose pseudoinverse. Implementing parallel algorithms for this pseudoinverse computation can therefore contribute significantly to applications in diverse areas by accelerating the training of extreme learning machines while maintaining the quality of the results. | eng |
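For context, the extreme learning machine training step mentioned in the abstract reduces to solving a linear system H β = T for the output weights β, where H is the hidden-layer output matrix and T the target matrix, and β is obtained from the Moore-Penrose pseudoinverse of H. The sketch below is only an illustration of that idea, not code from the paper or the reviewed works; the function names, the tanh activation, and the default parameters are assumptions.

    import numpy as np

    def elm_train(X, T, n_hidden=100, seed=0):
        """Illustrative single-hidden-layer ELM: random input weights, pseudoinverse output weights."""
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, kept fixed
        b = rng.standard_normal(n_hidden)                # random hidden biases, kept fixed
        H = np.tanh(X @ W + b)                           # hidden-layer output matrix
        beta = np.linalg.pinv(H) @ T                     # Moore-Penrose pseudoinverse (SVD-based in NumPy)
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

Here np.linalg.pinv computes the pseudoinverse through a singular value decomposition; the parallel algorithms surveyed in the paper target exactly this step, replacing it with factorization-based or iterative solvers on multi-core CPUs and GPUs.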
dc.format.mimetype | spa | |
dc.identifier.citation | IOP Publishing | eng |
dc.identifier.doi | https://doi.org/10.1088/1742-6596/1702/1/012017 | |
dc.identifier.issn | 1742-6596 | |
dc.identifier.uri | https://hdl.handle.net/20.500.12442/8802 | |
dc.language.iso | eng | eng |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | eng |
dc.rights.accessrights | info:eu-repo/semantics/openAccess | eng |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | |
dc.source | Journal of Physics: Conference Series | eng |
dc.source | Vol. 1702 (2020) | |
dc.subject | Multilayer perceptron | eng |
dc.subject | Support vector machines | eng |
dc.subject | Algorithms | eng |
dc.subject | Moore-Penrose | eng |
dc.title | Parallel methods for linear systems solution in extreme learning machines: an overview | eng |
dc.type.driver | info:eu-repo/semantics/article | eng |
dc.type.spa | Artículo científico | spa |
dcterms.references | Lin C F and Wang S D 2002 Fuzzy support vector machines IEEE Transactions on Neural Networks 13(2) 464–471 | eng |
dcterms.references | Lu S, Wang X, Zhang G and Zhou X 2015 Effective algorithms of the Moore-Penrose inverse matrices for extreme learning machine Intelligent Data Analysis 19(4) 743–760 | eng |
dcterms.references | Rauber T and Rünger G 2013 Performance analysis of parallel programs Parallel Programming (Berlin: Springer) pp 169–226 | eng |
dcterms.references | He Q, Shang T, Zhuang F and Shi Z 2013 Parallel extreme learning machine for regression based on MapReduce Neurocomputing 102 52–58 | eng |
dcterms.references | Alaba P A, Popoola S I, Olatomiwa L, Akanle M B, Ohunakin O S, Adetiba E, Alex O D, Atayero A A and Daud W M A W 2019 Towards a more efficient and cost-sensitive extreme learning machine: A state-of-the-art review of recent trend Neurocomputing 350 70–90 | eng |
dcterms.references | Parkavi R M, Shanthi M and Bhuvaneshwari M C 2017 Recent trends in ELM and MLELM: A review Advances in Science, Technology and Engineering Systems Journal 2(1) 69–75 | eng |
dcterms.references | Lyche T 2020 Numerical Linear Algebra and Matrix Factorizations (Oslo: Springer) | eng |
dcterms.references | Hornik K, Stinchcombe M, White H et al. 1989 Multilayer feedforward networks are universal approximators. Neural networks 2(5) 359–366 | eng |
dcterms.references | Huang G B, Zhu Q Y and Siew C K 2004 Extreme learning machine: a new learning scheme of feedforward neural networks IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541) (Budapest: IEEE) | eng |
dcterms.references | Salazar E, Mora M, Vásquez A and Gelvez E 2020 Conditioning of extreme learning machine for noisy data using heuristic optimization Journal of Physics: Conference Series 1514 012007:1 | eng |
dcterms.references | Tang J, Deng C, Huang G 2016 Extreme learning machine for multilayer perceptron IEEE Transactions on Neural Networks and Learning Systems 27(4) 809–821 | eng |
dcterms.references | Kasun L, Zhou H, Huang G and Chi M 2013 Representational learning with ELMs for big data IEEE Intelligent Systems 28(6) 31–34 | eng |
dcterms.references | Yi H B, Nie Z and Li B 2018 Efficient implementations of Gaussian elimination in finite fields on ASICs for MQ cryptographic systems Journal of Discrete Mathematical Sciences and Cryptography 21(3) 797–802 | eng |
dcterms.references | Pan V Y and Zhao L 2017 Numerically safe Gaussian elimination with no pivoting Linear Algebra and its Applications 527 349–383 | eng |
dcterms.references | Abouelfarag A A, Nouh N M, ElShenawy M 2016 Scalable parallel approach for dense linear algebra International Conference on High Performance Computing & Simulation (HPCS) (Innsbruck: IEEE) | eng |
dcterms.references | Liu Y, Xiong R and Xiao Y 2016 An MPI+OpenMP+CUDA hybrid parallel scheme for MT Occam inversion International Journal of Grid and Distributed Computing 9(9) 67–82 | eng |
dcterms.references | Dumas J G, Gautier T, Pernet C, Roch J L, Sultan Z 2016 Recursion based parallelization of exact dense linear algebra routines for Gaussian elimination Parallel Computing 57 235–249 | eng |
dcterms.references | Zhang S, Baharlouei E and Wu P 2020 High accuracy matrix computations on neural engines: A study of QR factorization and its applications Proceedings of the 29th International Symposium on High-Performance Parallel and Distributed Computing (New York: Association for Computing Machinery) pp 17–28 | eng |
dcterms.references | Lu Y, Yamazaki I, Ino F, Matsushita Y, Tomov S and Dongarra J 2020 Reducing the amount of out-of-core data access for GPU-accelerated randomized SVD Concurrency and Computation: Practice and Experience 32(19) e5754 | eng |
dcterms.references | Tomás A E, Rodríguez-Sánchez R, Catalán S, Carratalá-Sáez R, Quintana-Ortí E S 2019 Dynamic look-ahead in the reduction to band form for the singular value decomposition Parallel Computing 81 22–31 | eng |
dcterms.references | Wu R 2019 Dynamic scheduling strategy for block parallel Cholesky factorization based on activity on edge network IEEE Access 7 66317–66324 | eng |
dcterms.references | Wu R 2018 A heterogeneous parallel Cholesky block factorization algorithm IEEE Access 6 14071–14077 | eng |
dcterms.references | Tapia-Romero M, Meneses-Viveros A, Hernández-Rubio E 2020 Parallel QR factorization using Givens rotations in MPI-CUDA for multi-GPU International Journal of Advanced Computer Science and Applications (IJACSA) 11(5) 636–645 | eng |
dcterms.references | Islam M S and Wang Q 2020 Hierarchical Jacobi iteration for structured matrices on GPUs using shared memory arXiv 2006.16465 1 | eng |
dcterms.references | Yang X, Wang X, Sheng J, Li Y and Luo P 2018 Parallelization and performance optimization of the Jacobi stencil algorithm International Conference on Sensing, Diagnostics, Prognostics, and Control (SDPC) (Xi’an: IEEE) | eng |
dcterms.references | Aslam M, Riaz O, Mumtaz S and Asif A D 2020 Performance comparison of GPU-based Jacobi solvers using CUDA provided synchronization methods IEEE Access 8 31792–31812 | eng |
dcterms.references | Naik T U and Guinde N 2017 Implementing the Gauss-Seidel algorithm for solving eigenvalues of symmetric matrices with CUDA International Conference on Computing Methodologies and Communication (ICCMC) (Erode: IEEE) pp 922–925 | eng |
dcterms.references | Wu Z, Xue Y, You X and Zhang C 2017 Hardware efficient detection for massive MIMO uplink with parallel Gauss-Seidel method 22nd International Conference on Digital Signal Processing (DSP) (London: IEEE) | eng |
dcterms.references | Huang G H, Xu Y Z, Yi X W, Xia M, Jiao Y Y and Zhang S 2020 Highly efficient iterative methods for solving linear equations of three-dimensional sphere discontinuous deformation analysis International Journal for Numerical and Analytical Methods in Geomechanics 44(9) 1301–1314 | eng |
oaire.version | info:eu-repo/semantics/publishedVersion | eng |