-
Applying artificial neural network for the selection of mixed refrigerant by boiling curve
Computer Research and Modeling, 2022, v. 14, no. 3, pp. 593-608.
The paper presents a method for selecting the composition of a refrigerant with a given isobaric cooling curve using an artificial neural network (ANN). The method is based on 1D convolutional layers. To train the neural network, we applied a technological model of a simple heat exchanger in the UniSim Design program, using the Peng – Robinson equation of state. Using this model, we created a synthetic database of isobaric boiling curves for refrigerants of different compositions. To populate the database, an algorithm was developed in the Python programming language, and isobaric boiling curves for 1 049 500 compositions were exported via the COM interface. The compositions were generated by the Monte Carlo method. The designed ANN architecture selects the composition of a mixed refrigerant from 101 points of its boiling curve; the output layer gives the molar flows of the components (methane, ethane, propane, nitrogen). The ANN was trained with a cyclical learning rate. To demonstrate the results, we selected a mixed-refrigerant composition for a natural gas cooling curve with a minimum temperature difference of 3 K and a maximum temperature difference of no more than 10 K; the result is better than the composition predicted by the UniSim SQP optimizer and better than the prediction of the $k$-nearest neighbors algorithm. The main contribution of this article is showing that an artificial neural network can be used to select the optimal refrigerant composition from an analysis of the natural gas cooling curve. This method can help engineers select the composition of the mixed refrigerant in real time, which will help reduce the energy consumption of natural gas liquefaction.
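The authors' exact network is not given in the abstract; the minimal sketch below (PyTorch) only illustrates the general shape of such a model: a 1D convolutional network mapping 101 boiling-curve points to four molar flows, trained with a cyclical learning rate. Layer sizes, channel counts, and learning-rate bounds are illustrative assumptions, not values from the paper.

```python
# Illustrative 1D CNN: 101 boiling-curve temperatures -> 4 molar flows
# (methane, ethane, propane, nitrogen). Not the authors' architecture.
import torch
import torch.nn as nn

class RefrigerantNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16, 64), nn.ReLU(),
            nn.Linear(64, 4),
            nn.Softplus(),          # molar flows are non-negative
        )

    def forward(self, curve):       # curve: (batch, 101) temperatures along the curve
        return self.head(self.features(curve.unsqueeze(1)))

model = RefrigerantNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Cyclical learning rate, as mentioned in the abstract (bounds are assumptions):
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=1e-4, max_lr=1e-2, step_size_up=2000, cycle_momentum=False)
```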
-
Dual-pass Feature-Fused SSD model for detecting multi-scale images of workers on the construction site
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 57-73.
When recognizing workers in images of a construction site obtained from surveillance cameras, it is typical for the detected objects to differ greatly in spatial scale from one another and from other objects. The accuracy of small-object detection can be increased by using the Feature-Fused modification of the SSD detector. Combined with overlapping image slicing at inference time, this model copes well with the detection of small objects. However, the practical use of this approach requires manual adjustment of the slicing parameters, which reduces detection accuracy on scenes that differ from those used in training, as well as for large objects. In this paper, we propose an algorithm for the automatic selection of image slicing parameters depending on the ratio of the characteristic geometric dimensions of objects in the image. We have developed a two-pass version of the Feature-Fused SSD detector for the automatic determination of optimal slicing parameters. On the first pass, a fast truncated version of the detector determines the characteristic sizes of the objects of interest; on the second pass, the final detection is performed with slicing parameters selected after the first pass. A dataset of images of workers on a construction site was collected; it includes large, small, and diverse images of workers. To compare a one-pass algorithm without slicing of the input image, a one-pass algorithm with uniform slicing, and the two-pass algorithm with optimal slicing selection, we considered tests with only large objects, with very small objects, and with a high density of objects both in the foreground and in the background or in the background only. Across the cases considered, our approach outperforms the compared ones, deals well with the problem of double detections, and demonstrates a quality of 0.82–0.91 according to the mAP (mean Average Precision) metric.
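The slicing rule itself is not specified in the abstract; the sketch below illustrates one possible way to derive a tile size from a fast first pass. The boxes format, target_rel_size, overlap, and the 64-pixel floor are illustrative assumptions, not values from the paper.

```python
# Illustrative two-pass slicing idea: estimate the characteristic object size
# from a fast first detection pass, then choose an overlapping tile grid for
# the second, full pass.
from statistics import median

def choose_tile_size(first_pass_boxes, image_h, target_rel_size=0.15, overlap=0.2):
    """first_pass_boxes: list of (x1, y1, x2, y2) from the truncated detector."""
    if not first_pass_boxes:
        return image_h, image_h            # nothing found: process the whole frame
    med_h = median(y2 - y1 for _, y1, _, y2 in first_pass_boxes)
    # Pick a tile in which the median object occupies ~target_rel_size of the height.
    tile = int(min(image_h, max(64, med_h / target_rel_size)))
    stride = max(1, int(tile * (1.0 - overlap)))   # overlap to avoid cutting objects
    return tile, stride

def iter_tiles(image_h, image_w, tile, stride):
    # Yields top-left corners of overlapping crops for the second detection pass.
    for y in range(0, max(1, image_h - tile + 1), stride):
        for x in range(0, max(1, image_w - tile + 1), stride):
            yield x, y
```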
-
MPI implementations of Conway’s Game of Life and Kohomoto-Oono cellular automata
Computer Research and Modeling, 2010, v. 2, no. 3, pp. 319-322.
Results obtained during the practical MPI training session at the MIPT-2010 high performance computing summer school are discussed. MPI was one of the technologies proposed to the participants for implementing their projects; a 3D version of Conway's Game of Life was proposed as a project. The algorithms used in the development and the theoretical and practical assessment of their scalability are analyzed.
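As an illustration of the kind of parallelization involved, here is a minimal mpi4py sketch of one 3D Game of Life step with slab decomposition along one axis and halo exchange between ranks. The grid size, the rule (survive with 4–5 live neighbours, born with exactly 5), and all other details are assumptions, not the summer-school code.

```python
# Minimal mpi4py sketch: 1D slab decomposition along z, periodic boundaries,
# ghost-slab exchange between neighbouring ranks each step.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N, NZ = 64, 64                               # global grid: NZ x N x N
nz = NZ // size                              # slabs per rank (assumes NZ % size == 0)
grid = (np.random.rand(nz + 2, N, N) < 0.1).astype(np.uint8)   # +2 ghost slabs

def step(g):
    up, down = (rank + 1) % size, (rank - 1) % size
    # Exchange ghost slabs with neighbouring ranks (periodic ring in z).
    comm.Sendrecv(np.ascontiguousarray(g[-2]), dest=up,   recvbuf=g[0],  source=down)
    comm.Sendrecv(np.ascontiguousarray(g[1]),  dest=down, recvbuf=g[-1], source=up)
    neigh = np.zeros((nz, N, N), dtype=np.int32)
    for dz in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dz == dy == dx == 0:
                    continue
                shifted = np.roll(np.roll(g, dy, axis=1), dx, axis=2)
                neigh += shifted[1 + dz:1 + dz + nz]
    alive = g[1:-1]
    # Illustrative 3D rule: survive with 4-5 neighbours, born with exactly 5.
    g[1:-1] = ((alive == 1) & ((neigh == 4) | (neigh == 5))) | \
              ((alive == 0) & (neigh == 5))
    return g

for _ in range(10):
    grid = step(grid)
```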
-
Review of algorithmic solutions for deployment of neural networks on lite devices
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1601-1619.
In today's technology-driven world, lite devices such as Internet of Things (IoT) devices and microcontrollers (MCUs) are becoming increasingly common. These devices are more energy-efficient and affordable, but often offer reduced capabilities compared with standard hardware, such as very limited memory and processing power for typical machine learning models. However, modern machine learning models can have millions of parameters, resulting in a large memory footprint. This complexity not only makes it difficult to deploy large models on resource-constrained devices but also increases the risk of latency and inefficiency in processing, which is critical in cases where real-time responses are required, such as autonomous driving and medical diagnostics. In recent years, neural networks have seen significant advancements in model optimization techniques that facilitate deployment and inference on these small devices. This narrative review offers a thorough examination of the progression and latest developments in neural network optimization, focusing on key areas such as quantization, pruning, knowledge distillation, and neural architecture search. It examines how these algorithmic solutions have progressed and how new approaches have improved upon existing techniques, making neural networks more efficient. The review is designed for machine learning researchers, practitioners, and engineers who may be unfamiliar with these methods but wish to explore the available techniques. It highlights ongoing research in optimizing networks for better performance, lower energy consumption, and faster training, all of which play an important role in the continued scalability of neural networks. Additionally, it identifies gaps in current research and provides a foundation for future studies, aiming to enhance the applicability and effectiveness of existing optimization strategies.
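Two of the surveyed techniques, magnitude pruning and post-training quantization, can be tried with stock PyTorch utilities. The sketch below is illustrative only; the model, sparsity level, and quantization settings are assumptions, not recommendations from the review.

```python
# Illustrative pruning and dynamic quantization of a toy model.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# 1) Magnitude pruning: zero out the 60% smallest-magnitude weights per layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.6)
        prune.remove(module, "weight")       # make the sparsity permanent

# 2) Post-training dynamic quantization: int8 weights for Linear layers.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 784)
print(quantized(x).shape)                    # torch.Size([1, 10])
```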
-
Advanced neural network models for UAV-based image analysis in remote pathology monitoring of coniferous forests
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 641-663.
The key problems of remote forest pathology monitoring for coniferous forests affected by insect pests are analyzed. It is shown that addressing these tasks requires multiclass classification of coniferous trees in high- and ultra-high-resolution images, which are promptly obtained through monitoring via satellites or unmanned aerial vehicles (UAVs). An analytical review of modern models and methods for multiclass classification of coniferous forest images was conducted, leading to the development of three fully convolutional neural network models, Mo-U-Net, At-Mo-U-Net, and Res-Mo-U-Net, all based on the classical U-Net architecture. Additionally, the Segformer transformer model was modified to suit the task. For RGB images of Abies sibirica fir trees affected by the four-eyed bark beetle Polygraphus proximus, captured with a UAV-mounted camera, two datasets were created: the first contains image fragments and their corresponding reference segmentation masks sized 256 × 256 × 3 pixels, while the second contains fragments sized 480 × 480 × 3 pixels. Comprehensive studies were conducted on each trained neural network model to evaluate both the classification accuracy for assessing the degree of damage (health status) of Abies sibirica trees and the computation speed on test subsets from each dataset. The results showed that for fragments sized 256 × 256 × 3 pixels, the At-Mo-U-Net model with an attention mechanism is preferred alongside the modified Segformer model, while for fragments sized 480 × 480 × 3 pixels, the Res-Mo-U-Net hybrid model with residual blocks demonstrated superior performance. Based on the classification accuracy and computation speed of each developed model, it was concluded that the Res-Mo-U-Net model is the most suitable choice for production-scale multiclass classification of affected fir trees: it strikes a balance between high classification accuracy and fast computation, meeting these conflicting requirements effectively.
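The concrete layers of Mo-U-Net, At-Mo-U-Net, and Res-Mo-U-Net are not given in the abstract. The PyTorch sketch below only illustrates the kind of residual double-convolution block that U-Net variants with residual connections typically use; channel counts are assumptions.

```python
# Illustrative residual encoder block for a U-Net-style segmentation network.
import torch
import torch.nn as nn

class ResidualDoubleConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip connection matches the output channels.
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.block(x) + self.skip(x))

# A 256 x 256 x 3 fragment, as in the first dataset described above:
x = torch.randn(1, 3, 256, 256)
print(ResidualDoubleConv(3, 64)(x).shape)    # torch.Size([1, 64, 256, 256])
```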
-
CUDA and OpenCL implementations of Conway’s Game of Life cellular automata
Computer Research and Modeling, 2010, v. 2, no. 3, pp. 323-326.
This article analyzes the experience of teaching the "CUDA and OpenCL programming" course at the MIPT-2010 high performance computing summer school. The content of the lectures and practical tasks, as well as the manner of presenting the material, is considered. Performance issues of the different algorithms implemented by students during the practical training session are discussed.
-
Image classification based on deep learning with automatic relevance determination and structured Bayesian pruning
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 927-938.
Deep learning's power stems from complex architectures; however, these can lead to overfitting, where models memorize training data and fail to generalize to unseen examples. This paper proposes a novel probabilistic approach to mitigate this issue. We introduce two key elements: a truncated log-uniform prior with a truncated log-normal variational approximation, and automatic relevance determination (ARD) with Bayesian deep neural networks (BDNNs). Within the probabilistic framework, we employ a specially designed truncated log-uniform prior for noise. This prior acts as a regularizer, guiding the learning process towards simpler solutions and reducing overfitting. Additionally, a truncated log-normal variational approximation is used for the efficient handling of the complex probability distributions inherent in deep learning models. ARD automatically identifies and removes irrelevant features or weights within a model. By integrating ARD with BDNNs, where weights have a probability distribution, we achieve a variational bound similar to the popular variational dropout technique. Dropout randomly drops neurons during training, encouraging the model not to rely heavily on any single feature. Our approach with ARD achieves similar benefits without the randomness of dropout, potentially leading to more stable training.
To evaluate our approach, we tested the model on two datasets: the Canadian Institute For Advanced Research (CIFAR-10) dataset for image classification and a Macroscopic Images of Wood dataset compiled from several macroscopic wood image collections. Our method is applied to established architectures such as the Visual Geometry Group (VGG) network and the Residual Network (ResNet). The results demonstrate significant improvements: the model reduced overfitting while maintaining, or even improving, the accuracy of the network's predictions on classification tasks. This validates the effectiveness of our approach in enhancing the performance and generalization capabilities of deep learning models.
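The truncated log-uniform prior and truncated log-normal approximation of the paper are not reproduced here. As a rough, related illustration, the sketch below implements a sparse-variational-dropout-style linear layer (improper log-uniform prior, multiplicative Gaussian noise, local reparameterization), which gives the kind of variational bound the abstract compares against; all settings are assumptions.

```python
# Hedged sketch of a variational-dropout-style linear layer, not the paper's
# truncated distributions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalDropoutLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.log_sigma2 = nn.Parameter(torch.full((out_features, in_features), -10.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def _log_alpha(self):
        return self.log_sigma2 - 2.0 * self.theta.abs().clamp_min(1e-8).log()

    def forward(self, x):
        if self.training:
            # Local reparameterization: sample pre-activations, not weights.
            mean = F.linear(x, self.theta, self.bias)
            var = F.linear(x * x, self.log_sigma2.exp()) + 1e-8
            return mean + var.sqrt() * torch.randn_like(mean)
        # At test time, weights with large noise-to-signal ratio are dropped.
        mask = (self._log_alpha() < 3.0).float()
        return F.linear(x, self.theta * mask, self.bias)

    def kl(self):
        # Approximate KL to the log-uniform prior (Molchanov et al., 2017).
        log_alpha = self._log_alpha()
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        neg_kl = k1 * torch.sigmoid(k2 + k3 * log_alpha) \
                 - 0.5 * F.softplus(-log_alpha) - k1
        return -neg_kl.sum()
```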
-
Analysis of the physics-informed neural network approach to solving ordinary differential equations
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1621-1636.
We consider the application of physics-informed neural networks based on multilayer perceptrons to the solution of Cauchy initial value problems in which the right-hand side of the equation is a continuous monotonically increasing, decreasing, or oscillating function. Computational experiments are used to study how the construction of the approximate neural network solution, the network structure, the optimization algorithm, and the software implementation affect the learning process and the accuracy of the obtained solution. The efficiency of the most frequently used machine learning frameworks for software development in Python and C# is analyzed; it is shown that using C# reduces neural network training time by 20–40%. The choice of activation function affects the learning process and the accuracy of the approximate solution; the most effective functions in the considered problems are the sigmoid and the hyperbolic tangent. For a fixed training time, the minimum of the loss function is achieved at a certain number of neurons in the hidden layer of a single-layer neural network, and further complicating the network structure by increasing the number of neurons does not improve the training results. At the same time, the grid step between the points of the training sample that provides a minimum of the loss function is almost the same for the considered Cauchy problems. For training single-layer neural networks, the Adam method and its modifications are the most effective optimizers. Additionally, the application of two- and three-layer neural networks is considered; in these cases it is reasonable to use the L-BFGS algorithm, which in some cases requires a much shorter training time than Adam to achieve the same solution accuracy. The specifics of neural network training for Cauchy problems in which the solution is an oscillating function with monotonically decreasing amplitude are also investigated. For such problems, the neural network solution should be constructed with a variable rather than a constant weight coefficient, which improves the solution in the grid cells located near the end point of the solution interval.
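A minimal PINN sketch for a Cauchy problem y' = f(t, y), y(t0) = y0, assuming PyTorch: the trial solution y(t) = y0 + (t - t0) * N(t) enforces the initial condition exactly, and the loss is the mean squared ODE residual at collocation points. The right-hand side, interval, and hyperparameters are illustrative; for the oscillating problems mentioned above, the constant factor in front of N(t) would be replaced by a variable weight.

```python
# Illustrative PINN for y' = -y, y(0) = 1 (exact solution e^{-t}).
import torch
import torch.nn as nn

t0, y0 = 0.0, 1.0
f = lambda t, y: -y                                   # example right-hand side

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t = torch.linspace(t0, 5.0, 101).reshape(-1, 1)       # collocation (training) points

for epoch in range(5000):
    opt.zero_grad()
    tc = t.clone().requires_grad_(True)
    y = y0 + (tc - t0) * net(tc)                      # trial solution
    dy_dt, = torch.autograd.grad(y, tc, torch.ones_like(y), create_graph=True)
    loss = ((dy_dt - f(tc, y)) ** 2).mean()           # ODE residual
    loss.backward()
    opt.step()

print(float(y[-1]), torch.exp(torch.tensor(-5.0)).item())  # compare with exact value
```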
-
Prediction of frequency resource occupancy in a cognitive radio system using the Kolmogorov – Arnold neural network
Computer Research and Modeling, 2025, v. 17, no. 1, pp. 109-123.
For cognitive radio systems, it is important to use efficient algorithms that search for free channels that can be offered to secondary users. This paper is therefore devoted to improving the accuracy of predicting the frequency resource occupancy of a cellular communication system using spatiotemporal radio environment maps. The formation of a radio environment map is implemented for the fourth-generation Long-Term Evolution cellular communication system. A model structure has been developed that includes data generation and allows training and testing of an artificial neural network to predict the occupancy of frequency resources represented as the contents of radio environment map cells. A method for assessing prediction accuracy is described. The simulation model of the cellular communication system is implemented in MATLAB, and the frequency resource occupancy prediction model is implemented in Python; the complete file structure of the model is presented. The experiments were performed using artificial neural networks based on the Long Short-Term Memory architecture and the Kolmogorov – Arnold network architecture, including a modification of the latter. It was found that, with an equal number of parameters, the Kolmogorov – Arnold network learns faster on this task. The obtained results indicate an increase in the accuracy of predicting the frequency resource occupancy of the cellular communication system when the Kolmogorov – Arnold network is used.
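Published KAN implementations parameterize each input-output edge with B-splines; the sketch below is a heavily simplified stand-in (Gaussian radial basis functions plus a SiLU base term) meant only to illustrate the idea of learnable one-dimensional edge functions. The window length and layer widths for occupancy prediction are assumptions, not the authors' modified network.

```python
# Simplified Kolmogorov-Arnold-style layer: one learnable 1D function per edge.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleKANLayer(nn.Module):
    def __init__(self, in_dim, out_dim, n_basis=8, lo=-2.0, hi=2.0):
        super().__init__()
        self.register_buffer("centers", torch.linspace(lo, hi, n_basis))
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_basis))
        self.base = nn.Parameter(0.1 * torch.randn(out_dim, in_dim))

    def forward(self, x):                        # x: (batch, in_dim)
        # Per-edge basis expansion: (batch, in_dim, n_basis)
        phi = torch.exp(-(x.unsqueeze(-1) - self.centers) ** 2)
        out = torch.einsum("bik,oik->bo", phi, self.coef)    # learnable 1D functions
        return out + F.silu(x) @ self.base.t()               # residual base term

# Occupancy prediction from a sliding window of W past radio-environment-map
# samples (W and widths are assumptions):
W = 32
model = nn.Sequential(SimpleKANLayer(W, 16), SimpleKANLayer(16, 1))
print(model(torch.randn(4, W)).shape)            # torch.Size([4, 1])
```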
-
Detecting large fractures in geological media using convolutional neural networks
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 889-901.
This paper considers the inverse problem of seismic exploration: determining the structure of the medium from the recorded wave response. Large cracks are considered as the target objects, whose size and position are to be determined.
The direct problem is solved using the grid-characteristic method. The method allows the use of physically based algorithms for treating the outer boundaries of the region and the contact boundaries inside it. The crack is assumed to be thin; a special condition on the crack faces is used to describe it.
The inverse problem is solved using convolutional neural networks. The input data of the neural network are seismograms interpreted as images. The output data are masks describing the medium on a structured grid. Each element of such a grid belongs to one of two classes — either an element of a continuous geological massif, or an element through which a crack passes. This approach allows us to consider a medium with an unknown number of cracks.
The neural network is trained using only samples with one crack. The final testing of the trained network is performed on additional samples containing several cracks, which are not involved in the training process. The purpose of testing under such conditions is to verify that the trained network has sufficient generality, recognizes the signature of a crack in the signal, and does not overfit to samples with a single crack in the medium.
The paper shows that a convolutional network trained on samples with a single crack can be used to process data with multiple cracks. The network detects fairly small cracks at great depths if they are sufficiently separated from each other in space; in this case their wave responses are clearly distinguishable on the seismogram and can be interpreted by the neural network. If the cracks are close to each other, artifacts and interpretation errors may occur, because the wave responses of nearby cracks merge on the seismogram. This causes the network to interpret several cracks located close together as one. It should be noted that a similar error would most likely be made by a human during manual interpretation of the data. The paper provides examples of such artifacts, distortions, and recognition errors.
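A schematic PyTorch setup for the seismogram-to-mask task described in this entry: a small encoder-decoder takes a seismogram treated as a single-channel image and outputs per-cell logits for the probability that a crack passes through a cell. In the paper the output mask describes the medium on its own structured grid; for brevity this sketch makes the output the same size as the input, and all layer widths and shapes are illustrative assumptions.

```python
# Illustrative encoder-decoder for binary crack masks from seismograms.
import torch
import torch.nn as nn

class SeismogramToMask(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),    # logits per mask cell
        )

    def forward(self, seismogram):                     # (batch, 1, receivers, samples)
        return self.decoder(self.encoder(seismogram))

model = SeismogramToMask()
loss_fn = nn.BCEWithLogitsLoss()                       # two classes: crack / no crack

seis = torch.randn(2, 1, 64, 256)                      # synthetic seismograms
mask = torch.randint(0, 2, (2, 1, 64, 256)).float()    # reference crack masks
loss = loss_fn(model(seis), mask)
loss.backward()
```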