A Review on Large-Scale Data Processing with Parallel and Distributed Randomized Extreme Learning Machine Neural Networks
datacite.rights | http://purl.org/coar/access_right/c_abf2 | |
dc.contributor.author | Gelvez-Almeida, Elkin | |
dc.contributor.author | Mora, Marco | |
dc.contributor.author | Barrientos, Ricardo | |
dc.contributor.author | Hernández García, Ruber | |
dc.contributor.author | Vilches, Karina | |
dc.contributor.author | Vera, Miguel | |
dc.date.accessioned | 2025-02-04T18:20:10Z | |
dc.date.available | 2025-02-04T18:20:10Z | |
dc.date.issued | 2024 | |
dc.description.abstract | The randomization-based feedforward neural network has attracted great interest in the scientific community due to its simplicity, training speed, and accuracy comparable to that of traditional learning algorithms. The basic algorithm randomly determines the weights and biases of the hidden layer and analytically calculates the weights of the output layer by solving an overdetermined linear system using the Moore–Penrose generalized inverse. When processing large volumes of data, randomization-based feedforward neural network models consume large amounts of memory and their training time increases drastically. To address these problems efficiently, parallel and distributed models have recently been proposed. Previous reviews of randomization-based feedforward neural network models have mainly focused on categorizing and describing the evolution of the algorithms presented in the literature. The main contribution of this paper is to approach the topic from the perspective of handling large volumes of data. In this sense, we present a current and extensive review of the parallel and distributed models of randomized feedforward neural networks, focusing on the extreme learning machine. In particular, we review the mathematical foundations (the Moore–Penrose generalized inverse and the solution of linear systems using parallel and distributed methods) and the hardware and software technologies considered in current implementations. | eng |
dc.format.mimetype | ||
dc.identifier.citation | Gelvez-Almeida, E.; Mora, M.; Barrientos, R.J.; Hernández-García, R.; Vilches-Ponce, K.; Vera, M. A Review on Large-Scale Data Processing with Parallel and Distributed Randomized Extreme Learning Machine Neural Networks. Math. Comput. Appl. 2024, 29, 40. https://doi.org/10.3390/mca29030040 | eng |
dc.identifier.doi | https://doi.org/10.3390/mca29030040 | |
dc.identifier.issn | 2297-8747 | |
dc.identifier.uri | https://hdl.handle.net/20.500.12442/16208 | |
dc.language.iso | eng | |
dc.publisher | MDPI | spa |
dc.rights.accessrights | info:eu-repo/semantics/openAccess | |
dc.source | Mathematical and Computational Applications | eng |
dc.source | Vol. 29, Issue 3 (2024) | spa |
dc.subject.keywords | Randomization-Based Feedforward Neural Network | eng |
dc.subject.keywords | Extreme Learning Machine | eng |
dc.subject.keywords | Moore–Penrose generalized inverse matrix | eng |
dc.subject.keywords | Parallel and distributed computing | eng |
dc.title | A Review on Large-Scale Data Processing with Parallel and Distributed Randomized Extreme Learning Machine Neural Networks | eng |
dc.type.driver | info:eu-repo/semantics/article | |
dc.type.spa | Artículo científico | |
dcterms.references | Schmidt, W.F.; Kraaijveld, M.A.; Duin, R.P. Feed forward neural networks with random weights. In Proceedings of the 11th IAPR International Conference on Pattern Recognition. Vol. II. Conference B: Pattern Recognition Methodology and Systems, The Hague, The Netherlands, 30 August–3 September 1992; IEEE: Piscataway, NJ, USA, 1992; pp. 1–4. | eng |
dcterms.references | Pao, Y.H.; Takefuji, Y. Functional-link net computing: theory, system architecture, and functionalities. Computer 1992, 25, 76–79. | eng |
dcterms.references | Pao, Y.H.; Park, G.H.; Sobajic, D.J. Learning and generalization characteristics of the random vector functional-link net. Neurocomputing 1994, 6, 163–180. | eng |
dcterms.references | Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: A new learning scheme of feedforward neural networks. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No. 04CH37541), Budapest, Hungary, 25–29 July 2004; IEEE: Piscataway, NJ, USA, 2004; pp. 985–990. | eng |
dcterms.references | Huang, G.B.; Wang, D.H.; Lan, Y. Extreme learning machines: A survey. Int. J. Mach. Learn. Cybern. 2011, 2, 107–122. | eng |
dcterms.references | Ahmadi, M.; Soofiabadi, M.; Nikpour, M.; Naderi, H.; Abdullah, L.; Arandian, B. Developing a deep neural network with fuzzy wavelets and integrating an inline PSO to predict energy consumption patterns in urban buildings. Mathematics 2022, 10, 1270. | eng |
dcterms.references | Sharifi, A.; Ahmadi, M.; Mehni, M.A.; Ghoushchi, S.J.; Pourasad, Y. Experimental and numerical diagnosis of fatigue foot using convolutional neural network. Comput. Methods Biomech. Biomed. Eng. 2021, 24, 1828–1840. | eng |
dcterms.references | Ahmadi, M.; Ahangar, F.D.; Astaraki, N.; Abbasi, M.; Babaei, B. FWNNet: presentation of a new classifier of brain tumor diagnosis based on fuzzy logic and the wavelet-based neural network using machine-learning methods. Comput. Intell. Neurosci. 2021, 2021, 8542637. | eng |
dcterms.references | Nomani, A.; Ansari, Y.; Nasirpour, M.H.; Masoumian, A.; Pour, E.S.; Valizadeh, A. PSOWNNs-CNN: A Computational Radiology for Breast Cancer Diagnosis Improvement Based on Image Processing Using Machine Learning Methods. Comput. Intell. Neurosci. 2022, 2022, 5667264. | eng |
dcterms.references | Zangeneh Soroush, M.; Tahvilian, P.; Nasirpour, M.H.; Maghooli, K.; Sadeghniiat-Haghighi, K.; Vahid Harandi, S.; Abdollahi, Z.; Ghazizadeh, A.; Jafarnia Dabanloo, N. EEG artifact removal using sub-space decomposition, nonlinear dynamics, stationary wavelet transform and machine learning algorithms. Front. Physiol. 2022, 13, 1572. | eng |
dcterms.references | Huérfano-Maldonado, Y.; Mora, M.; Vilches, K.; Hernández-García, R.; Gutiérrez, R.; Vera, M. A comprehensive review of extreme learning machine on medical imaging. Neurocomputing 2023, 556, 126618. | eng |
dcterms.references | Patil, H.; Sharma, K. Extreme learning machine: A comprehensive survey of theories & algorithms. In Proceedings of the 2023 International Conference on Computational Intelligence and Sustainable Engineering Solutions (CISES), Greater Noida, India, 28–30 April 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 749–756. | eng |
dcterms.references | Kaur, R.; Roul, R.K.; Batra, S. Multilayer extreme learning machine: a systematic review. Multimed. Tools Appl. 2023. | eng |
dcterms.references | Vásquez-Coronel, J.A.; Mora, M.; Vilches, K. A Review of multilayer extreme learning machine neural networks. Artif. Intell. Rev. 2023, 56, 13691–13742. | eng |
dcterms.references | Wang, J.; Lu, S.; Wang, S.H.; Zhang, Y.D. A review on extreme learning machine. Multimed. Tools Appl. 2022, 81, 41611–41660. | eng |
dcterms.references | Martínez, D.; Zabala-Blanco, D.; Ahumada-García, R.; Azurdia-Meza, C.A.; Flores-Calero, M.; Palacios-Jativa, P. Review of extreme learning machines for the identification and classification of fingerprint databases. In Proceedings of the 2022 IEEE Colombian Conference on Communications and Computing (COLCOM), Cali, Colombia, 27–29 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–6. | eng |
dcterms.references | Kaur, M.; Das, D.; Mishra, S.P. Survey and evaluation of extreme learning machine on TF-IDF feature for sentiment analysis. In Proceedings of the 2022 International Conference on Machine Learning, Computer Systems and Security (MLCSS), Bhubaneswar, India, 5–6 August 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 247–252. | eng |
dcterms.references | Nilesh, R.; Sunil, W. Review of Optimization in Improving Extreme Learning Machine. EAI Endorsed Trans. Ind. Netw. Intell. Syst. 2021, 8, e2. | eng |
dcterms.references | Mujal, P.; Martínez-Peña, R.; Nokkala, J.; García-Beni, J.; Giorgi, G.L.; Soriano, M.C.; Zambrini, R. Opportunities in quantum reservoir computing and extreme learning machines. Adv. Quantum Technol. 2021, 4, 2100027. | eng |
dcterms.references | Nilesh, R.; Sunil, W. Improving extreme learning machine through optimization a review. In Proceedings of the 2021 7th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 19–20 March 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 906–912. | eng |
dcterms.references | Rodrigues, I.R.; da Silva Neto, S.R.; Kelner, J.; Sadok, D.; Endo, P.T. Convolutional Extreme Learning Machines: A Systematic Review. Informatics 2021, 8, 33. | eng |
dcterms.references | Saldaña-Olivas, E.; Huamán-Tuesta, J.R. Extreme learning machine for business sales forecasts: A systematic review. In Smart Innovation, Systems and Technologies, Proceedings of the 5th Brazilian Technology Symposium (BTSym 2019), Campinas, Brazil, 22–24 October 2019; Iano, Y., Arthur, R., Saotome, O., Kemper, G., Padilha França, R., Eds.; Springer: São Paulo, Brazil, 2021; pp. 87–96. | eng |
dcterms.references | Wang, Z.; Luo, Y.; Xin, J.; Zhang, H.; Qu, L.; Wang, Z.; Yao, Y.; Zhu, W.; Wang, X. Computer-Aided Diagnosis Based on Extreme Learning Machine: A Review. IEEE Access 2020, 8, 141657–141673. | eng |
dcterms.references | Wang, Z.; Sui, L.; Xin, J.; Qu, L.; Yao, Y. A Survey of Distributed and Parallel Extreme Learning Machine for Big Data. IEEE Access 2020, 8, 201247–201258. | eng |
dcterms.references | Alaba, P.A.; Popoola, S.I.; Olatomiwa, L.; Akanle, M.B.; Ohunakin, O.S.; Adetiba, E.; Alex, O.D.; Atayero, A.A.; Daud, W.M.A.W. Towards a more efficient and cost-sensitive extreme learning machine: A state-of-the-art review of recent trend. Neurocomputing 2019, 350, 70–90. | eng |
dcterms.references | Yibo, L.; Fang, L.; Qi, C. A Review of the Research on the Prediction Model of Extreme Learning Machine. J. Phys. Conf. Ser. 2019, 1213, 042013. | eng |
dcterms.references | Li, L.; Sun, R.; Cai, S.; Zhao, K.; Zhang, Q. A review of improved extreme learning machine methods for data stream classification. Multimed. Tools Appl. 2019, 78, 33375–33400. | eng |
dcterms.references | Eshtay, M.; Faris, H.; Obeid, N. Metaheuristic-based extreme learning machines: A review of design formulations and applications. Int. J. Mach. Learn. Cybern. 2019, 10, 1543–1561. | eng |
dcterms.references | Ghosh, S.; Mukherjee, H.; Obaidullah, S.M.; Santosh, K.; Das, N.; Roy, K. A survey on extreme learning machine and evolution of its variants. In Proceedings of the Recent Trends in Image Processing and Pattern Recognition. Second International Conference, RTIP2R 2018, Solapur, India, 21–22 December 2018; Santosh, K.C., Hegadi, R.S., Eds.; Springer: Singapore, 2019; Volume 1035, pp. 572–583. | eng |
dcterms.references | Zhang, S.; Tan, W.; Li, Y. A survey of online sequential extreme learning machine. In Proceedings of the 2018 5th International Conference on Control, Decision and Information Technologies (CoDIT), Thessaloniki, Greece, 10–13 April 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 45–50. | eng |
dcterms.references | Alade, O.A.; Selamat, A.; Sallehuddin, R. A review of advances in extreme learning machine techniques and its applications. In Proceedings of the Recent Trends in Information and Communication Technology, Johor Bahru, Malaysia, 23–24 April 2017; Saeed, F., Gazem, N., Patnaik, S., Saed Balaid, A.S., Mohammed, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2017; pp. 885–895. | eng |
dcterms.references | Salaken, S.M.; Khosravi, A.; Nguyen, T.; Nahavandi, S. Extreme learning machine based transfer learning algorithms: A survey. Neurocomputing 2017, 267, 516–524. | eng |
dcterms.references | Albadra, M.A.A.; Tiun, S. Extreme learning machine: A review. Int. J. Appl. Eng. Res. 2017, 12, 4610–4623. | eng |
dcterms.references | Ali, M.H.; Zolkipli, M.F. Review on hybrid extreme learning machine and genetic algorithm to work as intrusion detection system in cloud computing. ARPN J. Eng. Appl. Sci. 2016, 11, 460–464. | eng |
dcterms.references | Huang, G.; Huang, G.B.; Song, S.; You, K. Trends in extreme learning machines: A review. Neural Netw. 2015, 61, 32–48. | eng |
dcterms.references | Cao, J.; Lin, Z. Extreme Learning Machines on High Dimensional and Large Data Applications: A Survey. Math. Probl. Eng. 2015, 2015, 103796. | eng |
dcterms.references | Ding, S.; Zhao, H.; Zhang, Y.; Xu, X.; Nie, R. Extreme learning machine: Algorithm, theory and applications. Artif. Intell. Rev. 2015, 44, 103–115. | eng |
dcterms.references | Deng, C.; Huang, G.; Xu, J.; Tang, J. Extreme learning machines: New trends and applications. Sci. China Inf. Sci. 2015, 58, 1–16. | eng |
dcterms.references | Ding, S.; Xu, X.; Nie, R. Extreme learning machine and its applications. Neural Comput. Appl. 2014, 25, 549–556. | eng |
dcterms.references | Liang, N.Y.; Huang, G.B.; Saratchandran, P.; Sundararajan, N. A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Trans. Neural Netw. 2006, 17, 1411–1423. | eng |
dcterms.references | Ali, M.H.; Fadlizolkipi, M.; Firdaus, A.; Khidzir, N.Z. A hybrid particle swarm optimization-extreme learning machine approach for intrusion detection system. In Proceedings of the 2018 IEEE Student Conference on Research and Development (SCOReD), Selangor, Malaysia, 26–28 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–4. | eng |
dcterms.references | Lyche, T. Numerical Linear Algebra and Matrix Factorizations; Springer: Oslo, Norway, 2020; Volume 22. | eng |
dcterms.references | Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501. | eng |
dcterms.references | Zhang, L.; Suganthan, P.N. A survey of randomized algorithms for training neural networks. Inf. Sci. 2016, 364–365, 146–155. | eng |
dcterms.references | Suganthan, P.N.; Katuwal, R. On the origins of randomization-based feedforward neural networks. Appl. Soft Comput. 2021, 105, 107239. | eng |
dcterms.references | Malik, A.K.; Gao, R.; Ganaie, M.; Tanveer, M.; Suganthan, P.N. Random vector functional link network: Recent developments, applications, and future directions. Appl. Soft Comput. 2023, 143, 110377. | eng |
dcterms.references | Safaei, A.; Wu, Q.J.; Akilan, T.; Yang, Y. System-on-a-Chip (SoC)-Based Hardware Acceleration for an Online Sequential Extreme Learning Machine (OS-ELM). IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2019, 38, 2127–2138. | eng |
dcterms.references | Grim, L.F.L.; Barajas, J.A.B.; Gradvohl, A.L.S. Implementações paralelas para o algoritmo Online Sequential Extreme Learning Machine aplicado à previsão de material particulado. Rev. Bras. Comput. Apl. 2019, 11, 13–21. | eng |
dcterms.references | Zehai, G.; Cunbao, M.; Jianfeng, Z.; Weijun, X. Remaining useful life prediction of integrated modular avionics using ensemble enhanced online sequential parallel extreme learning machine. Int. J. Mach. Learn. Cybern. 2021, 12, 1893–1911. | eng |
dcterms.references | Polat, Ö.; Kayhan, S.K. GPU-accelerated and mixed norm regularized online extreme learning machine. Concurr. Comput. Pract. Exp. 2022, 34, e6967. | eng |
dcterms.references | Vovk, V. Kernel ridge regression. In Empirical Inference; Schölkopf, B., Luo, Z., Vovk, V., Eds.; Springer: Berlin, Germany, 2013; pp. 105–116. | eng |
dcterms.references | Huang, G.B.; Zhou, H.; Ding, X.; Zhang, R. Extreme Learning Machine for Regression and Multiclass Classification. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2012, 42, 513–529. | eng |
dcterms.references | Deng, W.Y.; Ong, Y.S.; Tan, P.S.; Zheng, Q.H. Online sequential reduced kernel extreme learning machine. Neurocomputing 2016, 174, 72–84. | eng |
dcterms.references | Wu, L.; Peng, Y.; Fan, J.; Wang, Y.; Huang, G. A novel kernel extreme learning machine model coupled with K-means clustering and firefly algorithm for estimating monthly reference evapotranspiration in parallel computation. Agric. Water Manag. 2021, 245, 106624. | eng |
dcterms.references | Huang, G.B.; Chen, L.; Siew, C.K. Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Trans. Neural Netw. 2006, 17, 879–892. | eng |
dcterms.references | Rong, H.J.; Ong, Y.S.; Tan, A.H.; Zhu, Z. A fast pruned-extreme learning machine for classification problem. Neurocomputing 2008, 72, 359–366. | eng |
dcterms.references | Zhu, Q.Y.; Qin, A.K.; Suganthan, P.N.; Huang, G.B. Evolutionary extreme learning machine. Pattern Recognit. 2005, 38, 1759–1763. | eng |
dcterms.references | Gelvez-Almeida, E.; Baldera-Moreno, Y.; Huérfano, Y.; Vera, M.; Mora, M.; Barrientos, R. Parallel methods for linear systems solution in extreme learning machines: An overview. J. Phys. Conf. Ser. 2020, 1702, 012017. | eng |
dcterms.references | Lu, S.; Wang, X.; Zhang, G.; Zhou, X. Effective algorithms of the Moore-Penrose inverse matrices for extreme learning machine. Intell. Data Anal. 2015, 19, 743–760. | eng |
dcterms.references | Young, D.M. Iterative Solution of Large Linear Systems; Elsevier: Orlando, FL, USA, 2014. | eng |
dcterms.references | Li, J.; Li, L.; Wang, Q.; Xue, W.; Liang, J.; Shi, J. Parallel optimization and application of unstructured sparse triangular solver on new generation of sunway architecture. Parallel Comput. 2024, 120, 103080. | eng |
dcterms.references | Gelvez-Almeida, E.; Barrientos, R.J.; Vilches-Ponce, K.; Mora, M. A Parallel Computing Method for the Computation of the Moore–Penrose Generalized Inverse for Shared-Memory Architectures. IEEE Access 2023, 11, 134834–134845. | eng |
dcterms.references | Lukyanenko, D. Parallel algorithm for solving overdetermined systems of linear equations, taking into account round-off errors. Algorithms 2023, 16, 242. | eng |
dcterms.references | Suzuki, K.; Fukaya, T.; Iwashita, T. A novel ILU preconditioning method with a block structure suitable for SIMD vectorization. J. Comput. Appl. Math. 2023, 419, 114687. | eng |
dcterms.references | Sabelfeld, K.K.; Kireev, S.; Kireeva, A. Parallel implementations of randomized vector algorithm for solving large systems of linear equations. J. Supercomput. 2023, 79, 10555–10569. | eng |
dcterms.references | Catalán, S.; Herrero, J.R.; Igual, F.D.; Quintana-Ortí, E.S.; Rodríguez-Sánchez, R. Fine-grain task-parallel algorithms for matrix factorizations and inversion on many-threaded CPUs. Concurr. Comput. Pract. Exp. 2022, 35, e6999. | eng |
dcterms.references | Rivera-Zamarripa, L.; Adj, G.; Cruz-Cortés, N.; Aguilar-Ibañez, C.; Rodríguez-Henríquez, F. A Parallel Strategy for Solving Sparse Linear Systems Over Finite Fields. Comput. Sist. 2022, 26, 493–504. | eng |
dcterms.references | Li, K.; Han, X. A distributed Gauss-Newton method for distribution system state estimation. Int. J. Electr. Power Energy Syst. 2022, 136, 107694. | eng |
dcterms.references | Hwang, H.S.; Ro, J.H.; Park, C.Y.; You, Y.H.; Song, H.K. Efficient Gauss-Seidel Precoding with Parallel Calculation in Massive MIMO Systems. CMC-Comput. Mater. Contin. 2022, 70, 491–504. | eng |
dcterms.references | Catalán, S.; Igual, F.D.; Rodríguez-Sánchez, R.; Herrero, J.R.; Quintana-Ortí, E.S. A New Generation of Task-Parallel Algorithms for Matrix Inversion in Many-Threaded CPUs. In Proceedings of the 12th International Workshop on Programming Models and Applications for Multicores and Manycores, Association for Computing Machinery, Virtual, 22 February 2021; pp. 1–10. | eng |
dcterms.references | Marrakchi, S.; Jemni, M. Parallel gaussian elimination of symmetric positive definite band matrices for shared-memory multicore architectures. RAIRO Oper. Res. 2021, 55, 905–927. | eng |
dcterms.references | Lu, Y.; Luo, Y.; Lian, H.; Jin, Z.; Liu, W. Implementing LU and Cholesky factorizations on artificial intelligence accelerators. CCF Trans. High Perform. Comput. 2021, 3, 286–297. | eng |
dcterms.references | Lee, W.K.; Achar, R. GPU-Accelerated Adaptive PCBSO Mode-Based Hybrid RLA for Sparse LU Factorization in Circuit Simulation. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2021, 40, 2320–2330. | eng |
dcterms.references | Zhang, X.W.; Zuo, L.; Li, M.; Guo, J.X. High-throughput FPGA implementation of matrix inversion for control systems. IEEE Trans. Ind. Electron. 2021, 68, 6205–6216. | eng |
dcterms.references | Rubensson, E.H.; Artemov, A.G.; Kruchinina, A.; Rudberg, E. Localized inverse factorization. IMA J. Numer. Anal. 2021, 41, 729–763. | eng |
dcterms.references | Rodriguez Borbon, J.M.; Huang, J.; Wong, B.M.; Najjar, W. Acceleration of Parallel-Blocked QR Decomposition of Tall-and-Skinny Matrices on FPGAs. ACM Trans. Archit. Code Optim. TACO 2021, 18, 27. | eng |
dcterms.references | Duan, T.; Dinavahi, V. A novel linking-domain extraction decomposition method for parallel electromagnetic transient simulation of large-scale AC/DC networks. IEEE Trans. Power Deliv. 2021, 36, 957–965. | eng |
dcterms.references | Schäfer, F.; Katzfuss, M.; Owhadi, H. Sparse Cholesky Factorization by Kullback-Leibler Minimization. SIAM J. Sci. Comput. 2021, 43, A2019–A2046. | eng |
dcterms.references | Boffi, D.; Lu, Z.; Pavarino, L.F. Iterative ILU preconditioners for linear systems and eigenproblems. J. Comput. Math. 2021, 39, 633–654. | eng |
dcterms.references | Ahmadi, A.; Manganiello, F.; Khademi, A.; Smith, M.C. A Parallel Jacobi-Embedded Gauss-Seidel Method. IEEE Trans. Parallel Distrib. Syst. 2021, 32, 1452–1464. | eng |
dcterms.references | Liu, Y.; Sid-Lakhdar, W.; Rebrova, E.; Ghysels, P.; Li, X.S. A parallel hierarchical blocked adaptive cross approximation algorithm. Int. J. High Perform. Comput. Appl. 2020, 34, 394–408. | eng |
dcterms.references | Davis, T.A.; Duff, I.S.; Nakov, S. Design and implementation of a parallel markowitz threshold algorithm. SIAM J. Matrix Anal. Appl. 2020, 41, 573–590. | eng |
dcterms.references | Yang, X.; Wang, N.; Xu, L. A parallel Gauss-Seidel method for convex problems with separable structure. Numer. Algebr. Control. Optim. 2020, 10, 557–570. | eng |
dcterms.references | Li, R.; Zhang, C. Efficient parallel implementations of sparse triangular solves for GPU architectures. In Proceedings of the 2020 SIAM Conference on Parallel Processing for Scientific Computing, SIAM, Washington, DC, USA, 12–15 February 2020; pp. 106–117. | eng |
dcterms.references | Singh, N.; Ma, L.; Yang, H.; Solomonik, E. Comparison of Accuracy and Scalability of Gauss-Newton and Alternating Least Squares for CP Decomposition. arXiv 2020, arXiv:1910.12331. | eng |
dcterms.references | Alyahya, H.; Mehmood, R.; Katib, I. Parallel iterative solution of large sparse linear equation systems on the intel MIC architecture. In Smart Infrastructure and Applications; Mehmood, R., See, S., Katib, I., Chlamtac, I., Eds.; Springer: Cham, Switzerland, 2020; pp. 377–407. | eng |
dcterms.references | Huang, G.H.; Xu, Y.Z.; Yi, X.W.; Xia, M.; Jiao, Y.Y.; Zhang, S. Highly efficient iterative methods for solving linear equations of three-dimensional sphere discontinuous deformation analysis. Int. J. Numer. Anal. Methods Geomech. 2020, 44, 1301–1314. | eng |
dcterms.references | Kirk, D.B.; Hwu, W.W. Programming Massively Parallel Processors: A Hands-On Approach, 3rd ed.; Morgan Kaufmann: Cambridge, UK, 2016. | eng |
dcterms.references | Chapman, B.; Jost, G.; Pas, R.V.D. Using OpenMP: Portable Shared Memory Parallel Programming; The MIT Press: London, UK, 2008. | eng |
dcterms.references | Xianyi, Z.; Kroeker, M. OpenBLAS: An Optimized BLAS Library. 2022. Available online: https://www.openblas.net (accessed on 20 September 2022). | eng |
dcterms.references | University of Tennessee; University of California; University of Colorado Denver; NAG Ltd. LAPACK—Linear Algebra PACKage. Netlib Repository at UTK and ORNL. 2022. Available online: http://www.netlib.org/lapack/ (accessed on 15 September 2022). | eng |
dcterms.references | Gropp, W.; Lusk, E.; Skjellum, A. Using MPI: Portable Parallel Programming with the Message-Passing Interface (Scientific and Engineering Computation Series), 3rd ed.; The MIT Press: London, UK, 2014. | eng |
dcterms.references | Intel Corporation. Intel oneAPI Math Kernel Library. Intel Corporation. 2020. Available online: https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/onemkl.html (accessed on 14 September 2022). | eng |
dcterms.references | NVIDIA Corporation. CUDA: Compute Unified Device Architecture. NVIDIA Corporation. 2022. Available online: http://developer.nvidia.com/object/cuda.html (accessed on 15 September 2022). | eng |
dcterms.references | Iles, G.; Jones, J.; Rose, A. Experience powering Xilinx Virtex-7 FPGAs. J. Instrum. 2013, 8, 12037. | eng |
dcterms.references | Wang, K.; Huo, S.; Liu, B.; Wang, Z.; Ren, T. An Adaptive Low Computational Cost Alternating Direction Method of Multiplier for RELM Large-Scale Distributed Optimization. Mathematics 2024, 12, 43. | eng |
dcterms.references | Jagadeesan, J.; Subashree, D.; Kirupanithi, D.N. An Optimized Ensemble Support Vector Machine-Based Extreme Learning Model for Real-Time Big Data Analytics and Disaster Prediction. Cogn. Comput. 2023, 15, 2152–2174. | eng |
dcterms.references | Wang, Z.; Huo, S.; Xiong, X.; Wang, K.; Liu, B. A Maximally Split and Adaptive Relaxed Alternating Direction Method of Multipliers for Regularized Extreme Learning Machines. Mathematics 2023, 11, 3198. | eng |
dcterms.references | Wang, G.; Soo, Z.S.D. BE-ELM: Biological ensemble Extreme Learning Machine without the need of explicit aggregation. Expert Syst. Appl. 2023, 230, 120677. | eng |
dcterms.references | Zhang, Y.; Dai, Y.; Wu, Q. A novel regularization paradigm for the extreme learning machine. Neural Process. Lett. 2023, 55, 7009–7033. | eng |
dcterms.references | Gelvez-Almeida, E.; Barrientos, R.J.; Vilches-Ponce, K.; Mora, M. Parallel training of a set of online sequential extreme learning machines. In Proceedings of the 2022 41st International Conference of the Chilean Computer Science Society (SCCC), Santiago, Chile, 21–25 November 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–4. | eng |
dcterms.references | Gelvez-Almeida, E.; Barrientos, R.J.; Vilches-Ponce, K.; Mora, M. Parallel model of online sequential extreme learning machines for classification problems with large-scale databases. In Proceedings of the XI Jornadas de Cloud Computing, Big Data & Emerging Topics, Universidad de la Plata, La Plata, Argentina, 27–29 June 2023. | eng |
dcterms.references | Chidambaram, S.; Gowthul Alam, M. An Integration of Archerfish Hunter Spotted Hyena Optimization and Improved ELM Classifier for Multicollinear Big Data Classification Tasks. Neural Process. Lett. 2022, 54, 2049–2077. | eng |
dcterms.references | Hira, S.; Bai, A. A Novel MapReduced Based Parallel Feature Selection and Extreme Learning for Micro Array Cancer Data Classification. Wirel. Pers. Commun. 2022, 123, 1483–1505. | eng |
dcterms.references | Rajpal, S.; Agarwal, M.; Rajpal, A.; Lakhyani, N.; Saggar, A.; Kumar, N. COV-ELM classifier: An Extreme Learning Machine based identification of COVID-19 using Chest X-Ray Images. Intell. Decis. Technol. 2022, 16, 193–203. | eng |
dcterms.references | Zha, L.; Ma, K.; Li, G.; Fang, Q.; Hu, X. A robust double-parallel extreme learning machine based on an improved M-estimation algorithm. Adv. Eng. Inform. 2022, 52, 101606. | eng |
dcterms.references | Vidhya, M.; Aji, S. Parallelized extreme learning machine for online data classification. Appl. Intell. 2022, 52. | eng |
dcterms.references | Rath, S.; Tripathy, A.; Swagatika, S. Application of ELM-mapreduce technique in stock market forecasting. In Intelligent and Cloud Computing; Mishra, D., Buyya, R., Mohapatra, P., Patnaik, S., Eds.; Springer: Singapore, 2021; Volume 2, pp. 469–476. | eng |
dcterms.references | Ji, H.; Wu, G.; Wang, G. Accelerating ELM training over data streams. Int. J. Mach. Learn. Cybern. 2021, 12, 87–102. | eng |
dcterms.references | Luo, F.; Liu, G.; Guo, W.; Chen, G.; Xiong, N. ML-KELM: A Kernel Extreme Learning Machine Scheme for Multi-Label Classification of Real Time Data Stream in SIoT. IEEE Trans. Netw. Sci. Eng. 2021, 9, 1–12. | eng |
dcterms.references | Tahir, G.A.; Loo, C.K. Progressive kernel extreme learning machine for food image analysis via optimal features from quality resilient CNN. Appl. Sci. 2021, 11, 9562. | eng |
dcterms.references | Dong, Z.; Lai, C.S.; Zhang, Z.; Qi, D.; Gao, M.; Duan, S. Neuromorphic extreme learning machines with bimodal memristive synapses. Neurocomputing 2021, 453, 38–49. | eng |
dcterms.references | Ezemobi, E.; Tonoli, A.; Silvagni, M. Battery State of Health Estimation with Improved Generalization Using Parallel Layer Extreme Learning Machine. Energies 2021, 14, 2243. | eng |
dcterms.references | Xu, Y.; Liu, H.; Long, Z. A distributed computing framework for wind speed big data forecasting on Apache Spark. Sustain. Energy Technol. Assess. 2020, 37, 100582. | eng |
dcterms.references | Li, X.; Liu, J.; Niu, P. Least Square Parallel Extreme Learning Machine for Modeling NOx Emission of a 300MW Circulating Fluidized Bed Boiler. IEEE Access 2020, 8, 79619–79636. | eng |
dcterms.references | Liang, Q.; Long, J.; Coppola, G.; Zhang, D.; Sun, W. Novel decoupling algorithm based on parallel voltage extreme learning machine (PV-ELM) for six-axis F/M sensors. Robot. Comput.-Integr. Manuf. 2019, 57, 303–314. | eng |
dcterms.references | Dokeroglu, T.; Sevinc, E. Evolutionary parallel extreme learning machines for the data classification problem. Comput. Ind. Eng. 2019, 130, 237–249. | eng |
dcterms.references | Dean, J.; Ghemawat, S. MapReduce: Simplified data processing on large clusters. In Proceedings of the 6th Symposium on Operating Systems Design and Implementation. USENIX Association, San Francisco, CA, USA, 6–8 December 2004; Volume 6, pp. 137–149. | eng |
dcterms.references | Dean, J.; Ghemawat, S. MapReduce: Simplified data processing on large clusters. Commun. ACM 2008, 51, 107–113. | eng |
dcterms.references | Gayathri, T.; Bhaskari, D.L. Oppositional Cuckoo Search Optimization based Clustering with Classification Model for Big Data Analytics in Healthcare Environment. J. Appl. Sci. Eng. 2021, 25, 743–751. | eng |
dcterms.references | Yao, L.; Ge, Z. Distributed parallel deep learning of Hierarchical Extreme Learning Machine for multimode quality prediction with big process data. Eng. Appl. Artif. Intell. 2019, 81, 450–465. | eng |
dcterms.references | Ku, J.; Zheng, B. Distributed extreme learning machine with kernels based on MapReduce for spectral-spatial classification of hyperspectral image. In Proceedings of the 2017 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC), Guangzhou, China, 21–24 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 325–332. | eng |
dcterms.references | Pang, J.; Gu, Y.; Xu, J.; Kong, X.; Yu, G. Parallel multi-graph classification using extreme learning machine and MapReduce. Neurocomputing 2017, 261, 171–183. | eng |
dcterms.references | Inaba, F.K.; Salles, E.O.T.; Perron, S.; Caporossi, G. DGR-ELM–distributed generalized regularized ELM for classification. Neurocomputing 2018, 275, 1522–1530. | eng |
dcterms.references | Huang, S.; Wang, B.; Qiu, J.; Yao, J.; Wang, G.; Yu, G. Parallel ensemble of online sequential extreme learning machine based on MapReduce. Neurocomputing 2016, 174, 352–367. | eng |
dcterms.references | Wang, B.; Huang, S.; Qiu, J.; Liu, Y.; Wang, G. Parallel online sequential extreme learning machine based on MapReduce. Neurocomputing 2015, 149, 224–232. | eng |
dcterms.references | Bi, X.; Zhao, X.; Wang, G.; Zhang, P.; Wang, C. Distributed Extreme Learning Machine with kernels based on MapReduce. Neurocomputing 2015, 149, 456–463. | eng |
dcterms.references | Han, D.H.; Zhang, X.; Wang, G.R. Classifying Uncertain and Evolving Data Streams with Distributed Extreme Learning Machine. J. Comput. Sci. Technol. 2015, 30, 874–887. | eng |
dcterms.references | Xiang, J.; Westerlund, M.; Sovilj, D.; Pulkkis, G. Using extreme learning machine for intrusion detection in a big data environment. In Proceedings of the 2014 Workshop on Artificial Intelligent and Security Workshop, Association for Computing Machinery, Scottsdale, AZ, USA, 7 November 2014; pp. 73–82. | eng |
dcterms.references | Xin, J.; Wang, Z.; Chen, C.; Ding, L.; Wang, G.; Zhao, Y. ELM*: Distributed extreme learning machine with MapReduce. World Wide Web 2014, 17, 1189–1204. | eng |
dcterms.references | He, Q.; Shang, T.; Zhuang, F.; Shi, Z. Parallel extreme learning machine for regression based on MapReduce. Neurocomputing 2013, 102, 52–58. | eng |
dcterms.references | Zaharia, M.; Chowdhury, M.; Franklin, M.J.; Shenker, S.; Stoica, I. Spark: Cluster Computing with Working Sets. In Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing, USENIX Association, Boston, MA, USA, 22–25 June 2010; pp. 1–10. | eng |
dcterms.references | Jaya Lakshmi, A.; Venkatramaphanikumar, S.; Venkata, K.K.K. Prediction of Cardiovascular Risk Using Extreme Learning Machine-Tree Classifier on Apache Spark Cluster. Recent Adv. Comput. Sci. Commun. 2022, 15, 443–455. | eng |
dcterms.references | Kozik, R.; Choraś, M.; Ficco, M.; Palmieri, F. A scalable distributed machine learning approach for attack detection in edge computing environments. J. Parallel Distrib. Comput. 2018, 119, 18–26. | eng |
dcterms.references | Kozik, R. Distributing extreme learning machines with Apache Spark for NetFlow-based malware activity detection. Pattern Recognit. Lett. 2018, 101, 14–20. | eng |
dcterms.references | Oneto, L.; Fumeo, E.; Clerico, G.; Canepa, R.; Papa, F.; Dambra, C.; Mazzino, N.; Anguita, D. Dynamic Delay Predictions for Large-Scale Railway Networks: Deep and Shallow Extreme Learning Machines Tuned via Thresholdout. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 2754–2767. | eng |
dcterms.references | Oneto, L.; Fumeo, E.; Clerico, G.; Canepa, R.; Papa, F.; Dambra, C.; Mazzino, N.; Anguita, D. Train Delay Prediction Systems: A Big Data Analytics Perspective. Big Data Res. 2018, 11, 54–64. | eng |
dcterms.references | Duan, M.; Li, K.; Liao, X.; Li, K. A Parallel Multiclassification Algorithm for Big Data Using an Extreme Learning Machine. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 2337–2351. | eng |
dcterms.references | Liu, T.; Fang, Z.; Zhao, C.; Zhou, Y. Parallelization of a series of extreme learning machine algorithms based on Spark. In Proceedings of the 2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), IEEE, Okayama, Japan, 26–29 June 2016; pp. 1–5. | eng |
dcterms.references | Navarro, C.A.; Carrasco, R.; Barrientos, R.J.; Riquelme, J.A.; Vega, R. GPU Tensor Cores for Fast Arithmetic Reductions. IEEE Trans. Parallel Distrib. Syst. 2021, 32, 72–84. | eng |
dcterms.references | Hou, X.C.; Lai, X.P.; Cao, J.W. A Maximally Split Generalized ADMM for Regularized Extreme Learning Machines. Tien Tzu Hsueh Pao/Acta Electron. Sin. 2021, 49, 625–630. | eng |
dcterms.references | El Zini, J.; Rizk, Y.; Awad, M. An optimized parallel implementation of non-iteratively trained recurrent neural networks. J. Artif. Intell. Soft Comput. Res. 2021, 11, 33–50. | eng |
dcterms.references | Li, S.; Niu, X.; Dou, Y.; Lv, Q.; Wang, Y. Heterogeneous blocked CPU-GPU accelerate scheme for large scale extreme learning machine. Neurocomputing 2017, 261, 153–163. | eng |
dcterms.references | Chen, C.; Li, K.; Ouyang, A.; Tang, Z.; Li, K. GPU-Accelerated Parallel Hierarchical Extreme Learning Machine on Flink for Big Data. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 2740–2753. | eng |
dcterms.references | Lam, D.; Wunsch, D. Unsupervised Feature Learning Classification With Radial Basis Function Extreme Learning Machine Using Graphic Processors. IEEE Trans. Cybern. 2016, 47, 224–231. | eng |
dcterms.references | Van Heeswijk, M.; Miche, Y.; Oja, E.; Lendasse, A. GPU-accelerated and parallelized ELM ensembles for large-scale regression. Neurocomputing 2011, 74, 2430–2437. | eng |
dcterms.references | Jezowicz, T.; Gajdoš, P.; Uher, V.; Snášel, V. Classification with extreme learning machine on GPU. In Proceedings of the 2015 International Conference on Intelligent Networking and Collaborative Systems, Taipei, Taiwan, 2–4 September 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 116–122. | eng |
dcterms.references | Li, J.; Guo, B.; Shen, Y.; Li, D.; Wang, J.; Huang, Y.; Li, Q. GPU-memory coordinated energy saving approach based on extreme learning machine. In Proceedings of the 2015 IEEE 17th International Conference on High Performance Computing and Communications, 2015 IEEE 7th International Symposium on Cyberspace Safety and Security, and 2015 IEEE 12th International Conference on Embedded Software and Systems, New York, NY, USA, 24–26 August 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 827–830. | eng |
dcterms.references | Krawczyk, B. GPU-Accelerated Extreme Learning Machines for Imbalanced Data Streams with Concept Drift. Procedia Comput. Sci. 2016, 80, 1692–1701. | eng |
dcterms.references | Dwivedi, S.; Vardhan, M.; Tripathi, S. Multi-Parallel Adaptive Grasshopper Optimization Technique for Detecting Anonymous Attacks in Wireless Networks. Wirel. Pers. Commun. 2021, 119, 2787–2816. | eng |
dcterms.references | Li, Y.; Zhang, S.; Yin, Y.; Xiao, W.; Zhang, J. Parallel one-class extreme learning machine for imbalance learning based on Bayesian approach. J. Ambient. Intell. Humaniz. Comput. 2024, 15, 1745–1762. | eng |
dcterms.references | Ming, Y.; Zhu, E.; Wang, M.; Ye, Y.; Liu, X.; Yin, J. DMP-ELMs: Data and model parallel extreme learning machines for large-scale learning tasks. Neurocomputing 2018, 320, 85–97. | eng |
dcterms.references | Henríquez, P.A.; Ruz, G.A. Extreme learning machine with a deterministic assignment of hidden weights in two parallel layers. Neurocomputing 2017, 226, 109–116. | eng |
dcterms.references | Luo, M.; Zhang, L.; Liu, J.; Guo, J.; Zheng, Q. Distributed extreme learning machine with alternating direction method of multiplier. Neurocomputing 2017, 261, 164–170. | eng |
dcterms.references | Wang, Y.; Dou, Y.; Liu, X.; Lei, Y. PR-ELM: Parallel regularized extreme learning machine based on cluster. Neurocomputing 2016, 173, 1073–1081. | eng |
oaire.version | info:eu-repo/semantics/publishedVersion |