All issues
- 2026 Vol. 18
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Image classification based on deep learning with automatic relevance determination and structured Bayesian pruning
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 927-938
Deep learning’s power stems from complex architectures; however, these can lead to overfitting, where models memorize training data and fail to generalize to unseen examples. This paper proposes a novel probabilistic approach to mitigate this issue. We introduce two key elements: a truncated log-uniform prior with a truncated log-normal variational approximation, and automatic relevance determination (ARD) with Bayesian deep neural networks (BDNNs). Within the probabilistic framework, we employ a specially designed truncated log-uniform prior for noise. This prior acts as a regularizer, guiding the learning process towards simpler solutions and reducing overfitting. Additionally, a truncated log-normal variational approximation is used for efficient handling of the complex probability distributions inherent in deep learning models. ARD automatically identifies and removes irrelevant features or weights within a model. By integrating ARD with BDNNs, where weights have a probability distribution, we achieve a variational bound similar to the popular variational dropout technique. Dropout randomly drops neurons during training, encouraging the model not to rely heavily on any single feature. Our approach with ARD achieves similar benefits without the randomness of dropout, potentially leading to more stable training.
To evaluate our approach, we have tested the model on two datasets: the Canadian Institute For Advanced Research (CIFAR-10) dataset for image classification and a Macroscopic Images of Wood dataset compiled from several macroscopic wood image collections. Our method is applied to established architectures such as the Visual Geometry Group (VGG) network and the Residual Network (ResNet). The results demonstrate significant improvements: the model reduced overfitting while maintaining, or even improving, the accuracy of the network’s predictions on classification tasks. This validates the effectiveness of our approach in enhancing the performance and generalization capabilities of deep learning models.
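The ARD pruning idea described above can be sketched in a few lines: once weights carry a posterior distribution, those whose posterior signal-to-noise ratio |mu|/sigma is low are effectively noise and can be zeroed out. This is a minimal illustrative sketch of that criterion, not the paper's actual training procedure; the function name and threshold are hypothetical.

```python
import math

def ard_prune(means, variances, snr_threshold=1.0):
    """Zero out weights whose posterior signal-to-noise ratio |mu|/sigma
    falls below a threshold -- the basic ARD pruning criterion."""
    pruned = []
    for mu, var in zip(means, variances):
        snr = abs(mu) / math.sqrt(var)
        pruned.append(mu if snr >= snr_threshold else 0.0)
    return pruned

# Weights with high uncertainty relative to their magnitude are removed.
weights = ard_prune(means=[1.2, 0.05, -0.8], variances=[0.01, 0.04, 0.09])
print(weights)  # [1.2, 0.0, -0.8]
```

Unlike dropout, which removes neurons at random during training, this criterion is deterministic given the learned posterior, which is the source of the stability benefit the abstract mentions.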
-
Model of steady river flow in the cross section of a curved channel
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1163-1178
Modeling of channel processes in the study of coastal channel deformations requires the calculation of hydrodynamic flow parameters that take into account the existence of secondary transverse currents formed at channel curvature. Three-dimensional modeling of such processes is currently possible only for small model channels; for real river flows, reduced-dimension models are needed. At the same time, the reduction of the problem from a three-dimensional model of river flow to a two-dimensional flow model in the cross-section assumes that the hydrodynamic flow under consideration is quasi-stationary and that the hypotheses about the asymptotic behavior of the flow along the streamwise coordinate of the cross-section are fulfilled for it. Taking these restrictions into account, a mathematical model of a stationary turbulent calm river flow in a channel cross-section is formulated. The problem is posed in a mixed velocity formulation of “vorticity – stream function” type. As additional conditions for this reduction, it is necessary to specify boundary conditions on the free surface of the flow for the velocity field, determined in the normal and tangential directions to the cross-section axis. It is assumed that the values of these velocities should be determined from the solution of auxiliary problems or obtained from field or experimental measurement data.
To solve the formulated problem, the finite element method in the Petrov – Galerkin formulation is used. A discrete analogue of the problem is obtained, and an algorithm for solving it is proposed. Numerical studies have shown that, in general, the results obtained are in good agreement with known experimental data. The authors attribute the observed errors to the need for a more accurate determination of the circulation velocity field in the cross-section of the flow by selecting and calibrating a more appropriate model for calculating turbulent viscosity and the boundary conditions at the free boundary of the cross-section.
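The “vorticity – stream function” formulation couples the stream function psi to the vorticity omega through the Poisson equation laplacian(psi) = -omega. As a toy illustration of that coupling only, here is a plain finite-difference Jacobi sketch on a square grid; the paper itself uses a Petrov – Galerkin finite element discretization, and the grid, boundary condition, and iteration count below are assumptions for the example.

```python
def solve_stream_function(omega, h=1.0, iterations=2000):
    """Jacobi iteration for the Poisson equation  laplacian(psi) = -omega
    on a uniform grid with psi = 0 on the boundary (toy illustration only)."""
    n, m = len(omega), len(omega[0])
    psi = [[0.0] * m for _ in range(n)]
    for _ in range(iterations):
        new = [[0.0] * m for _ in range(n)]
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                new[i][j] = 0.25 * (psi[i + 1][j] + psi[i - 1][j]
                                    + psi[i][j + 1] + psi[i][j - 1]
                                    + h * h * omega[i][j])
        psi = new
    return psi

# Constant vorticity in a small square section; psi peaks at the centre.
omega = [[1.0] * 5 for _ in range(5)]
psi = solve_stream_function(omega)
print(round(psi[2][2], 3))  # 1.125
```

Velocity components in the cross-section are then recovered from derivatives of psi, which is what makes the mixed formulation convenient for secondary-current calculations.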
-
Model of assimilation potential in lake ecosystem on the example of biogenic pollutants
Computer Research and Modeling, 2024, v. 16, no. 6, pp. 1447-1465
A model of biogeochemical cycles of nutrient transformation in the ecosystem of a water body has been developed, using Lake Teletskoye (TL) as an example, to assess its assimilation potential in the absence of direct measurements of total nitrogen and phosphorus concentrations, which are replaced by the corresponding simulated data. The validity of this substitution is justified by checking the adequacy of the simulation results against average monthly long-term observations for all state variables of the model. The model was calibrated using data from observations of water quality in 1985–2003, as well as a scenario version of the hydrological regime in 2016. An analysis of the intra-annual variability of the state variables and of nitrogen and phosphorus inputs and outputs in TL water is given. Preliminary values of the permissible N and P loads on the lake are assessed. The model analysis showed that the lake has practically no assimilation potential with respect to phosphorus compounds: the corresponding threshold concentration, Ptot = 0.013 gP/m3, is equal to the average annual content over the 18-year observation period. The threshold content of nitrogen is Ntot = 0.895 gN/m3. The assimilation potential for nitrogen is small, on the order of the second decimal place, bearing in mind that its simulated average annual value is 0.836 gN/m3. The simulation results indicate that the waters of TL, due to their low temperatures along with their unique purity, are characterized by an extremely poorly developed community of hydrobionts. In other lakes, an increase in anthropogenic pressure could be mitigated through utilization by the vital activity of well-developed hydrobiont communities. Here, there is no sufficient self-purification resource, and a relatively small increase in anthropogenic load can lead to a loss of sustainability.
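The nitrogen and phosphorus figures quoted above imply the remaining margins directly: the assimilation potential is the gap between the threshold concentration and the simulated average annual concentration. A small sketch of that arithmetic (the function name is illustrative):

```python
def assimilation_margin(threshold, simulated_mean):
    """Remaining assimilation potential as the gap between the threshold
    concentration and the simulated average annual concentration."""
    return threshold - simulated_mean

# Figures quoted in the abstract for Lake Teletskoye, gN/m3 and gP/m3.
n_margin = assimilation_margin(threshold=0.895, simulated_mean=0.836)
p_margin = assimilation_margin(threshold=0.013, simulated_mean=0.013)
print(round(n_margin, 3), round(p_margin, 3))  # 0.059 0.0
```

The zero phosphorus margin is exactly the abstract's conclusion that the lake has practically no assimilation potential for phosphorus, while the nitrogen margin of about 0.06 gN/m3 is small but nonzero.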
-
Prediction of frequency resource occupancy in a cognitive radio system using the Kolmogorov – Arnold neural network
Computer Research and Modeling, 2025, v. 17, no. 1, pp. 109-123
For cognitive radio systems, it is important to use efficient algorithms that search for free channels that can be provided to secondary users. Therefore, this paper is devoted to improving the accuracy of predicting the frequency resource occupancy of a cellular communication system using spatiotemporal radio environment maps. The formation of a radio environment map is implemented for the fourth-generation Long-Term Evolution cellular communication system. Taking this into account, a model structure has been developed that includes data generation and allows training and testing of an artificial neural network to predict the occupancy of frequency resources presented as the contents of radio environment map cells. A method for assessing prediction accuracy is described. The simulation model of the cellular communication system is implemented in MATLAB. The developed frequency resource occupancy prediction model is implemented in Python. The complete file structure of the model is presented. The experiments were performed using artificial neural networks based on the Long Short-Term Memory and Kolmogorov – Arnold architectures, taking into account their modifications. It was found that with an equal number of parameters, the Kolmogorov – Arnold neural network learns faster for the given task. The obtained research results indicate an increase in the accuracy of predicting the occupancy of the frequency resource of the cellular communication system when using the Kolmogorov – Arnold neural network.
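A Kolmogorov – Arnold network differs from an LSTM or MLP in that its edges carry learnable univariate functions combined by summation, following the superposition f(x) = sum_q Phi_q(sum_p phi_{q,p}(x_p)). Below is a minimal structural sketch of that superposition with piecewise-linear univariate functions; the class and function names are hypothetical, and real KAN implementations learn spline coefficients rather than using fixed tables as here.

```python
import bisect

class PiecewiseLinear:
    """A univariate function given by values on a knot grid -- the kind of
    learnable edge function a Kolmogorov - Arnold layer parameterizes."""
    def __init__(self, knots, values):
        self.knots, self.values = knots, values

    def __call__(self, x):
        # Find the bracketing interval and interpolate linearly.
        i = max(1, min(bisect.bisect_left(self.knots, x), len(self.knots) - 1))
        x0, x1 = self.knots[i - 1], self.knots[i]
        y0, y1 = self.values[i - 1], self.values[i]
        t = (x - x0) / (x1 - x0)
        return y0 + t * (y1 - y0)

def kan_forward(x, inner, outer):
    """f(x) = sum_q outer[q]( sum_p inner[q][p](x[p]) ) -- the
    Kolmogorov - Arnold superposition."""
    return sum(phi(sum(inner[q][p](x[p]) for p in range(len(x))))
               for q, phi in enumerate(outer))

# Identity edge functions make the network compute a plain sum.
ident = PiecewiseLinear([0.0, 1.0], [0.0, 1.0])
ident2 = PiecewiseLinear([0.0, 2.0], [0.0, 2.0])
print(round(kan_forward([0.3, 0.4], inner=[[ident, ident]], outer=[ident2]), 6))
```

Because the nonlinearity lives in these univariate functions rather than in fixed activations, a KAN can match an MLP's expressiveness with fewer parameters, which is consistent with the faster learning at equal parameter count reported above.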
-
Detecting large fractures in geological media using convolutional neural networks
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 889-901
This paper considers the inverse problem of seismic exploration — determining the structure of a medium from the recorded wave response. Large cracks, whose size and position are to be determined, are considered as the target objects.
The direct problem is solved using the grid-characteristic method. The method allows the use of physically based algorithms for calculating the outer boundaries of the region and the contact boundaries inside it. The crack is assumed to be thin; a special condition on the crack borders is used to describe it.
The inverse problem is solved using convolutional neural networks. The input data of the neural network are seismograms interpreted as images. The output data are masks describing the medium on a structured grid. Each element of such a grid belongs to one of two classes — either an element of a continuous geological massif, or an element through which a crack passes. This approach allows us to consider a medium with an unknown number of cracks.
The neural network is trained using only samples with one crack. The final testing of the trained network is performed using additional samples with several cracks. These samples are not involved in the training process. The purpose of testing under such conditions is to verify that the trained network has sufficient generality, recognizes the signs of a crack in the signal, and does not overfit to samples with a single crack in the medium.
The paper shows that a convolutional network trained on samples with a single crack can be used to process data with multiple cracks. The network detects fairly small cracks at great depths if they are sufficiently spatially separated from each other. In this case, their wave responses are clearly distinguishable on the seismogram and can be interpreted by the neural network. If the cracks are close to each other, artifacts and interpretation errors may occur. This is due to the fact that on the seismogram the wave responses of close cracks merge, which causes the network to interpret several cracks located nearby as one. It should be noted that a similar error would most likely be made by a human during manual interpretation of the data. The paper provides examples of some such artifacts, distortions, and recognition errors.
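The seismogram-to-mask mapping above rests on the same operation a convolutional layer performs: correlating the signal with a learned template of the crack response. As a toy stand-in for the trained network (not the authors' model; the template, threshold, and trace below are invented for illustration), a single 1D cross-correlation followed by thresholding already produces a binary crack mask:

```python
def cross_correlate(signal, kernel):
    """Valid-mode 1D cross-correlation -- the core operation of a
    convolutional layer."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

def crack_mask(signal, template, threshold):
    """Binary mask: 1 where correlation with the crack template exceeds a
    threshold, 0 elsewhere (toy stand-in for the CNN's output mask)."""
    scores = cross_correlate(signal, template)
    return [1 if s > threshold else 0 for s in scores]

# A synthetic trace containing the crack template [1, -1] at position 2.
trace = [0.0, 0.0, 1.0, -1.0, 0.0, 0.0]
print(crack_mask(trace, template=[1.0, -1.0], threshold=1.5))  # [0, 0, 1, 0, 0]
```

When two such signatures overlap in the trace, their correlations merge into one broad peak, which is the toy analogue of the merging artifact described for closely spaced cracks.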
-
Impact of spatial resolution on mobile robot path optimality in two-dimensional lattice models
Computer Research and Modeling, 2025, v. 17, no. 6, pp. 1131-1148
This paper examines the impact of the spatial resolution of a discretized (lattice) representation of the environment on the efficiency and correctness of optimal pathfinding in complex environments. Scenarios are considered that may include bottlenecks, non-uniform obstacle distributions, and areas of increased safety requirements in the immediate vicinity of obstacles. Despite the widespread use of lattice representations of the environment in robotics due to their compatibility with sensor data and support for classical trajectory planning algorithms, the resolution of these lattices has a significant impact on both goal reachability and optimal path performance. An algorithm is proposed that combines environmental connectivity analysis, trajectory optimization, and geometric safety refinement. In the first stage, the Leath algorithm is used to estimate the reachability of the target point by identifying a connected component containing the starting position. Upon confirmation of the target point’s reachability, the A* algorithm is applied to the nodes of this component in the second stage to construct a path that simultaneously minimizes both the path length and the risk of collision. In the third stage, a refined obstacle distance estimate is performed for nodes located in safety zones using a combination of the Gilbert – Johnson – Keerthi (GJK) and expanding polytope (EPA) algorithms. Experimental analysis revealed a nonlinear relationship between the probability of the existence and effectiveness of an optimal path and the lattice parameters. Specifically, reducing the spatial resolution of the lattice increases the likelihood of connectivity loss and target unreachability, while increasing its spatial resolution increases computational complexity without a proportional improvement in the optimal path’s performance.
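The second stage of the pipeline above is classical A* over the lattice. A minimal sketch on a 4-connected grid with a Manhattan heuristic follows; it covers only path length (the paper's cost also includes collision risk, and the Leath connectivity check and GJK/EPA refinement are omitted), and the grid below is invented for illustration.

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 4-connected lattice; cells equal to 1 are obstacles.
    Returns the shortest path length in steps, or None if the goal is
    unreachable (the case a connectivity pre-check screens out)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start)]
    best = {start: 0}
    while open_set:
        _, g, (r, c) = heapq.heappop(open_set)
        if (r, c) == goal:
            return g
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g + 1 < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = g + 1
                    heapq.heappush(open_set,
                                   (g + 1 + h((nr, nc)), g + 1, (nr, nc)))
    return None

# A 3x3 lattice with a wall forcing a detour around the middle row.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # 6
```

Coarsening the lattice can merge the gap at (1, 2) into an obstacle cell, making the goal unreachable, which is exactly the connectivity-loss effect the experimental analysis reports.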
-
Physics-assisted cascade neural network model for predicting pressure losses of a three-phase mixture in a pipeline
Computer Research and Modeling, 2026, v. 18, no. 1, pp. 117-131
The paper presents a cascade physics-assisted neural network model designed to predict the pressure drop in a three-phase flow (oil, gas, water) in a pipe section with various angles of inclination. To overcome the constraints of existing empirical correlations and computation-intensive numerical modeling methods, we propose an architecture that decomposes the problem into three sequential, physically interpretable subtasks: regression prediction of the fluid hold-up coefficient, flow regime classification, and pressure gradient evaluation. Each subtask is solved by a separate fully connected neural network, the output of which is passed to the next model in the cascade. Training and testing of the proposed architecture were performed on an extensive synthetic dataset (8 · 10⁷ records) generated using a semi-empirical model. Verification is performed on independent experimental data. A comparative analysis with a single fully connected (non-cascade) neural network is made, and the sensitivity of the models is examined using the Sobol and Borgonovo methods. The cascade model demonstrates superior accuracy and ensures high interpretability of results by providing intermediate physical parameters (fluid hold-up coefficient, flow regime). The developed model has low computational complexity, which allows it to be used in real-time systems and digital twins of hydraulic systems in the oil and gas industry.
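The cascade wiring described above is simple to express: each stage's output is appended to the feature vector of the next stage. The sketch below shows only that data flow; the three lambda stand-ins are invented placeholders for the trained networks, and the feature values are arbitrary.

```python
def cascade_pressure_model(flow_inputs, holdup_net, regime_net, gradient_net):
    """Three-stage cascade: each stage's output is appended to the features
    of the next, mirroring the architecture in the abstract. The *_net
    arguments stand in for trained neural networks."""
    holdup = holdup_net(flow_inputs)                          # stage 1: regression
    regime = regime_net(flow_inputs + [holdup])               # stage 2: classification
    gradient = gradient_net(flow_inputs + [holdup, regime])   # stage 3: regression
    return {"holdup": holdup, "regime": regime, "pressure_gradient": gradient}

# Illustrative stand-ins, not the paper's trained models.
result = cascade_pressure_model(
    flow_inputs=[0.4, 1.2, 0.1],
    holdup_net=lambda x: min(1.0, x[0] / (x[0] + x[1])),
    regime_net=lambda x: 1 if x[-1] > 0.3 else 0,
    gradient_net=lambda x: 50.0 * x[-2] + 10.0 * x[-1],
)
print(result)
```

Because the intermediate hold-up and regime values are returned alongside the final gradient, the cascade remains physically inspectable, which is the interpretability advantage claimed over the single end-to-end network.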
-
Calibration of diversity indexes and search for ecologically tolerable levels of abiotic factors (case study: water objects of the Don river)
Computer Research and Modeling, 2009, v. 1, no. 2, pp. 199-207
Using data obtained by long-term (1978–1988) hydrobiological monitoring of water objects of the Don river, rank distribution parameters and dominance indexes for phytoplankton species abundance were calculated. Boundaries of the investigated characteristics are determined; they correspond to the boundaries between ecological well-being and trouble conditions of phytoplankton communities. Ecologically tolerable levels of the core abiotic factors are found, and the contribution of each analyzed factor to the degree of ecological trouble is established.
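Diversity and dominance indexes of the kind calibrated above are standard functions of species abundances. As a sketch, here are two common choices, the Shannon diversity index and the Berger – Parker dominance index (these are the textbook formulas; the abstract does not specify which indexes the authors used, and the counts below are hypothetical):

```python
import math

def shannon_index(abundances):
    """Shannon diversity H' = -sum p_i * ln(p_i) over species proportions."""
    total = sum(abundances)
    return -sum((n / total) * math.log(n / total) for n in abundances if n > 0)

def berger_parker_dominance(abundances):
    """Berger - Parker dominance: proportion of the most abundant species."""
    return max(abundances) / sum(abundances)

counts = [50, 30, 20]  # hypothetical phytoplankton cell counts per species
print(round(shannon_index(counts), 3), berger_parker_dominance(counts))
```

Falling diversity together with rising dominance is the usual quantitative signature of the "trouble" end of the well-being scale such calibrations define.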
-
Biological and physico-chemical data from natural objects for ecological environmental monitoring
Computer Research and Modeling, 2010, v. 2, no. 2, pp. 199-207
Methods for establishing standards of environmental quality from ecological monitoring data are proposed. These are: methods of bioindication by indices of species diversity and size structure of communities and by indices of fish productivity; a method for searching for causes of environmental trouble and ranking them by their contribution to it; and methods for standardization of the factors that are important as causes of environmental trouble.
-
Optimization of integral estimation of bio-systems state using parallel calculation
Computer Research and Modeling, 2011, v. 3, no. 1, pp. 93-99
An approach to optimization of the integral estimation of the state of bio-systems is presented. The approach includes procedures for decreasing the variability of the integral estimation based on statistical modeling of experimental data sets, and for optimizing the number of state characteristics on the basis of their relative contribution to the integral estimation, using parallel calculation.
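One simple reading of the reduction step above: compute each characteristic's relative contribution to a weighted integral estimate, drop those below a cutoff, and recompute the estimate from the retained subset. The scheme, function name, cutoff, and numbers below are all illustrative assumptions, not the paper's actual procedure:

```python
def reduce_characteristics(values, weights, cutoff=0.1):
    """Keep only characteristics whose relative contribution to the
    weighted integral estimate exceeds a cutoff, then recompute the
    estimate from the retained subset (illustrative scheme only)."""
    total = sum(abs(w) for w in weights)
    kept = [(v, w) for v, w in zip(values, weights) if abs(w) / total >= cutoff]
    kept_weight = sum(abs(w) for _, w in kept)
    return sum(v * w for v, w in kept) / kept_weight

# Four state characteristics; the third contributes under 10% and is dropped.
estimate = reduce_characteristics(values=[0.8, 0.6, 0.9, 0.4],
                                  weights=[0.5, 0.3, 0.05, 0.15])
print(round(estimate, 4))  # 0.6737
```

Since each characteristic's contribution can be evaluated independently, this screening step parallelizes naturally over characteristics, which is where the parallel calculation in the title fits in.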
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index