All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
- Development of advanced intrusion detection approach using machine and ensemble learning for Industrial Internet of Things networks
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 799-827
Industrial Internet of Things (IIoT) networks play a significant role in enhancing industrial automation systems by connecting industrial devices for real-time data monitoring and predictive maintenance. However, this connectivity introduces new vulnerabilities, which demand the development of advanced intrusion detection systems. Nuclear facilities are among the clearest examples of critical infrastructures that become highly vulnerable through the connectivity of IIoT networks. This paper develops a robust intrusion detection approach using machine and ensemble learning algorithms tailored specifically to IIoT networks. The approach achieves optimal performance with the low time complexity required by real-time IIoT networks. For each algorithm, Grid Search is used to fine-tune the hyperparameters, optimizing performance while ensuring computational efficiency. The proposed approach is evaluated on recent IIoT intrusion detection datasets, WUSTL-IIOT-2021 and Edge-IIoT-2022, to cover a wide range of attacks with high precision and minimal false alarms. The study assesses the effectiveness of ten machine and ensemble learning models on selected features of the datasets. Synthetic Minority Over-sampling Technique (SMOTE)-based multi-class balancing is used to handle dataset imbalance. An ensemble voting classifier combines the best models with their best hyperparameters, leveraging their individual strengths to improve performance with the least time complexity. The machine and ensemble learning algorithms are evaluated on accuracy, precision, recall, F1 score, and time complexity; this evaluation identifies the most suitable candidates for further optimization. The proposed approach, called the XCL approach, is based on Extreme Gradient Boosting (XGBoost), CatBoost (Categorical Boosting), and Light Gradient-Boosting Machine (LightGBM). It achieves high accuracy, a low false positive rate, and efficient time complexity. The results highlight the importance of ensemble strategies, algorithm selection, and hyperparameter optimization in detecting the different intrusions across the IIoT datasets more effectively than the other models. The developed approach achieved an accuracy of 99.99% on the WUSTL-IIOT-2021 dataset and 100% on the Edge-IIoTset dataset. The experimental evaluations have been extended to the CIC-IDS-2017 dataset. These additional evaluations not only highlight the applicability of the XCL approach to a wide spectrum of intrusion detection scenarios but also confirm its scalability and effectiveness in real-world complex network environments.
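The abstract describes a pipeline of SMOTE balancing, per-model Grid Search, and a soft-voting ensemble of XGBoost, CatBoost, and LightGBM. The sketch below is a minimal illustration of that general pattern, not the paper's implementation: the feature matrix `X`, labels `y`, and parameter grids are placeholder assumptions, not the settings used on WUSTL-IIOT-2021 or Edge-IIoT-2022.

```python
# Illustrative sketch only: SMOTE balancing, per-model grid search, and a
# soft-voting ensemble of XGBoost, CatBoost, and LightGBM. X and y stand in
# for the selected dataset features; the parameter grids are placeholders.
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score, f1_score
from xgboost import XGBClassifier
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier

def build_xcl_like_ensemble(X, y):
    # Balance the multi-class data with SMOTE.
    X_bal, y_bal = SMOTE(random_state=42).fit_resample(X, y)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_bal, y_bal, test_size=0.2, stratify=y_bal, random_state=42)

    # Grid-search each base learner separately (placeholder grids).
    candidates = {
        "xgb": (XGBClassifier(eval_metric="mlogloss"),
                {"n_estimators": [100, 300], "max_depth": [4, 8]}),
        "cat": (CatBoostClassifier(verbose=0),
                {"depth": [4, 8], "iterations": [200]}),
        "lgbm": (LGBMClassifier(),
                 {"n_estimators": [100, 300], "num_leaves": [31, 63]}),
    }
    tuned = []
    for name, (model, grid) in candidates.items():
        search = GridSearchCV(model, grid, cv=3, scoring="f1_macro", n_jobs=-1)
        search.fit(X_tr, y_tr)
        tuned.append((name, search.best_estimator_))

    # Combine the tuned models with soft voting and evaluate.
    ensemble = VotingClassifier(estimators=tuned, voting="soft", n_jobs=-1)
    ensemble.fit(X_tr, y_tr)
    pred = ensemble.predict(X_te)
    return ensemble, accuracy_score(y_te, pred), f1_score(y_te, pred, average="macro")
```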
- Mathematical model and heuristic methods for organizing distributed computations in Internet of Things systems
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 851-870
Significant progress has recently been made in distributed computing theory, where computational tasks are solved collectively by resource-constrained devices. In practice, this scenario arises when processing data in Internet of Things systems: data is processed on edge computing devices to reduce system latency and the load on the network infrastructure. However, the rapid growth and widespread adoption of IoT systems raise the question of developing methods to reduce the resource intensity of computations. The resource constraints of computing devices pose the following issues for distributing computational resources: first, the need to account for the cost of data transit between devices solving different tasks; second, the need to account for the resource cost of the distribution process itself, which is particularly relevant for groups of autonomous devices such as drones or robots. An analysis of openly available modern publications shows the absence of models or methods for distributing computational resources that take all of these factors into account simultaneously, which makes the creation of a new mathematical model for organizing distributed computing in IoT systems, together with methods for solving it, relevant. This article proposes a novel mathematical model for distributing computational resources along with heuristic optimization methods, providing an integrated approach to implementing distributed computing in IoT systems. A scenario is considered in which a leader device within a group makes decisions on allocating computational resources, including its own, for distributed task resolution involving information exchange. It is also assumed that there is no prior knowledge of which device will assume the role of leader or of the migration paths of computational tasks across devices. Experimental results show the effectiveness of the proposed model and heuristics: up to a 52% reduction in the resource cost of solving computational problems while accounting for data transit costs, savings of up to 73% of resources through supplementary criteria that optimize task distribution by minimizing fragment migrations and distances, and up to a 28-fold decrease in the resource cost of solving the resource distribution problem itself, with a reduction in distribution quality of at most 10%.
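To make the optimization setting concrete, the following is a purely hypothetical sketch of a leader-side greedy heuristic that assigns task fragments to devices by compute cost plus data transit cost. It is not the model or the heuristics proposed in the article; all class names, cost fields, and the greedy rule are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' model): a leader device greedily
# assigns task fragments to devices, scoring each assignment by the device's
# computation cost plus the cost of moving the fragment's input data from the
# device that currently holds it. All names and cost fields are illustrative.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    capacity: float        # remaining compute resource units
    cost_per_unit: float   # resource cost per unit of work

@dataclass
class Fragment:
    name: str
    work: float            # compute units required
    data_at: str           # device currently holding the input data
    data_size: float       # units of data to transfer if the fragment migrates

def greedy_assign(fragments, devices, transit_cost):
    """transit_cost[(src, dst)] -> cost per unit of data moved from src to dst."""
    plan, total = {}, 0.0
    for frag in sorted(fragments, key=lambda f: f.work, reverse=True):
        best_dev, best_cost = None, float("inf")
        for dev in devices:
            if dev.capacity < frag.work:
                continue  # skip devices without enough free resource
            move = (0.0 if dev.name == frag.data_at
                    else frag.data_size * transit_cost.get((frag.data_at, dev.name), 1.0))
            cost = frag.work * dev.cost_per_unit + move
            if cost < best_cost:
                best_dev, best_cost = dev, cost
        if best_dev is None:
            raise RuntimeError(f"no device can host fragment {frag.name}")
        best_dev.capacity -= frag.work
        plan[frag.name] = best_dev.name
        total += best_cost
    return plan, total
```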
- Review of algorithmic solutions for deployment of neural networks on lite devices
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1601-1619
In today's technology-driven world, lite devices such as Internet of Things (IoT) devices and microcontrollers (MCUs) are becoming increasingly common. These devices are more energy-efficient and affordable, often with reduced capabilities compared to standard hardware, such as very limited memory and processing power for typical machine learning models. However, modern machine learning models can have millions of parameters, resulting in a large memory footprint. This complexity not only makes it difficult to deploy these large models on resource-constrained devices but also increases the risk of latency and inefficiency in processing, which is critical where real-time responses are required, such as autonomous driving and medical diagnostics. In recent years, neural networks have seen significant advancements in model optimization techniques that facilitate deployment and inference on these small devices. This narrative review offers a thorough examination of the progression and latest developments in neural network optimization, focusing on key areas such as quantization, pruning, knowledge distillation, and neural architecture search. It examines how these algorithmic solutions have evolved and how new approaches have improved upon existing techniques, making neural networks more efficient. The review is intended for machine learning researchers, practitioners, and engineers who may be unfamiliar with these methods but wish to explore the available techniques. It highlights ongoing research on optimizing networks to achieve better performance, lower energy consumption, and faster training times, all of which play an important role in the continued scalability of neural networks. Additionally, it identifies gaps in current research and provides a foundation for future studies, aiming to enhance the applicability and effectiveness of existing optimization strategies.
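As a minimal illustration of two of the surveyed techniques, the sketch below applies magnitude-based unstructured pruning and post-training dynamic quantization to a small toy model. PyTorch is used only as an example framework; the review itself is framework-agnostic, and the toy model and the 50% pruning ratio are assumptions chosen for the example.

```python
# Minimal sketch of two surveyed techniques on a toy MLP: magnitude-based
# unstructured pruning and post-training dynamic quantization. The model
# architecture and the 50% pruning ratio are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Prune the 50% smallest-magnitude weights in each Linear layer (L1 criterion),
# then make the pruning permanent by removing the re-parametrization.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")

# Post-training dynamic quantization: store Linear weights as int8 and
# quantize activations on the fly, shrinking the memory footprint.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print(quantized)
```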