All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Application of computational simulation techniques for designing swim-out release systems
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 597-606
The article describes the basic approaches of a procedure for calculating the swim-out of a payload (objects of various purposes with their own propulsor) from an underwater carrier by the self-exit method using modern CFD technologies. It describes swim-out by the self-exit method and its advantages and disadvantages. It also presents the results of a grid-convergence study of the finite-volume model against an accuracy-versus-time criterion, and the results of comparing the calculations with experiment (validation of the models). The models were validated against available experimental data on the thrust characteristics of a full-scale water-jet propulsor obtained in a development pool. The thrust characteristics of the water-jet propulsor were calculated with the FlowVision software package, ver. 3.10. Based on the comparison of the calculation results with the experimental conditions, the error of the computational model of the water-jet propulsor was found to be no more than 5% over the range of advance coefficients realized during swim-out by the self-exit method. The obtained error of the thrust-characteristic calculations is used to determine the limiting computed values (minimum and maximum) of the speed at which the object separates from the carrier. The problem considered is significant from the scientific point of view owing to the features of the approach to modeling the water-jet propulsion system together with the motion of the separated object, and from the practical point of view owing to the possibility of obtaining, with a high degree of reliability and already at the design stage, the swim-out parameters of objects released by the self-exit method from seabed vehicles whose operating conditions assume motion in closed volumes.
-
Comparative analysis of human adaptation to the growth of visual information in the tasks of recognizing formal symbols and meaningful images
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 571-586
We describe an engineering-psychological experiment that continues the study of how a person adapts to the increasing complexity of logical problems. The subject is presented with a series of problems of increasing complexity, determined by the volume of initial data. The tasks require calculations in an associative or a non-associative system of operations. From the way the solution time changes with the number of required operations, we can conclude whether the person solves the problem in a purely sequential way or engages additional brain resources to work in parallel. In a previously published experimental work, the person solving the associative problem recognized color images of meaningful objects. In the new study, a similar problem is solved for abstract monochrome geometric shapes. Analysis of the results showed that in the second case the probability that the subject switches to parallel processing of visual information is significantly reduced. The research method is based on presenting a person with two types of tasks. One type of problem involves associative calculations and admits a parallel solution algorithm. The other type is a control one, containing problems in which the calculations are not associative and parallel algorithms are ineffective. The task of recognizing and searching for a given object is associative; a parallel strategy significantly speeds up the solution at a relatively small additional cost in resources. As the control series of problems (to separate parallel work from mere acceleration of a sequential algorithm), we use, as in the previous experiment, a non-associative comparison problem in cyclic arithmetic, presented in the visual form of the game “rock, paper, scissors”. In this problem, a parallel algorithm requires a large number of processors with a small efficiency coefficient. Therefore, the transition of a person to a parallel algorithm for solving this problem is practically impossible, and faster processing of the input information is achievable only by increasing the processing speed. Comparing the dependence of the solution time on the volume of source data for the two types of problems allows us to identify four types of strategies for adapting to increasing problem complexity: uniform sequential, accelerated sequential, parallel computing (where possible), or a strategy undefined for this method. The reduction in the number of subjects who switch to a parallel strategy when the input information is encoded with formal images shows the effectiveness of codes that evoke associations in the subject: they increase the speed at which a person perceives and processes information. The article contains a preliminary mathematical model that explains this phenomenon. It is based on the appearance of a second set of initial data, which arises in a person as a result of recognizing the depicted objects.
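As an illustration of why the control task resists parallelization, here is a minimal sketch (not taken from the paper) of the cyclic “rock, paper, scissors” comparison; the encoding of the three symbols as residues modulo 3 and the function names are assumptions made only for this example.

```python
# Minimal sketch of the non-associative control task: pairwise comparison
# in cyclic ("rock, paper, scissors") arithmetic. The encoding 0 = rock,
# 1 = paper, 2 = scissors is an illustrative assumption, not taken from the paper.

def beats(a: int, b: int) -> bool:
    """True if symbol a beats symbol b in the cyclic order rock < paper < scissors < rock."""
    return (a - b) % 3 == 1

def winner_of_sequence(seq):
    """Fold the sequence left to right, keeping the current winner.
    The comparison is not associative, so the result depends on the order of
    processing and cannot be split cheaply into independent parallel chunks."""
    current = seq[0]
    for s in seq[1:]:
        current = s if beats(s, current) else current
    return current

print(winner_of_sequence([0, 1, 2, 0]))  # rock, paper, scissors, rock -> rock
```

Because the pairwise comparison is not associative, the running winner has to be carried through the sequence step by step, which is exactly the property that prevents an efficient parallel strategy for this control problem.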
-
The dynamic model of a high-rise firefighting drone
Computer Research and Modeling, 2022, v. 14, no. 1, pp. 115-126
The use of unmanned aerial vehicles (UAVs) in high-rise firefighting operations is an effective way to reach a fire on high floors quickly. The article proposes a quadrotor-type firefighting UAV model carrying a launcher that fires a missile containing fire-extinguishing powder into the fire. The kinematic model describing the flight of this UAV is built using the Newton – Euler method, both when the vehicle is in normal motion and at the moment the firefighting missile is launched. The simulation results, which test the validity of the model and reproduce the motion of the UAV, show that the variations of the Euler angles, flight-path angles, and aerodynamic angles during flight remain within an acceptable range and that the overload constraints are satisfied. The UAV flew to the correct position to launch the required fire-extinguishing ammunition. These results form the basis for building a control system for high-rise firefighting drones in Vietnam.
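The following is a minimal sketch (not from the paper) of Newton – Euler rigid-body integration for a quadrotor-type vehicle; the mass, inertia tensor, small-angle kinematics, and the simple hover scenario are illustrative assumptions, not the parameters of the drone studied in the article.

```python
# Minimal sketch of Newton-Euler rigid-body integration for a quadrotor-type UAV.
# The mass, inertia and the hover scenario below are illustrative assumptions,
# not the parameters of the firefighting drone described in the paper.
import numpy as np

m, g = 4.0, 9.81                      # mass [kg], gravity [m/s^2] (assumed values)
J = np.diag([0.08, 0.08, 0.14])       # inertia tensor [kg*m^2] (assumed values)

def euler_to_rotation(phi, theta, psi):
    """Rotation matrix from body to inertial frame (Z-Y-X Euler angles)."""
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    return np.array([
        [cp*ct, cp*st*sf - sp*cf, cp*st*cf + sp*sf],
        [sp*ct, sp*st*sf + cp*cf, sp*st*cf - cp*sf],
        [-st,   ct*sf,            ct*cf],
    ])

def step(state, thrust, torque, dt):
    """One explicit Euler step of the Newton-Euler equations.
    state = (position, velocity, Euler angles, body angular rates)."""
    pos, vel, ang, omega = state
    R = euler_to_rotation(*ang)
    acc = np.array([0.0, 0.0, -g]) + R @ np.array([0.0, 0.0, thrust]) / m
    omega_dot = np.linalg.solve(J, torque - np.cross(omega, J @ omega))
    # Small-angle kinematics: Euler angle rates approximated by body rates.
    return (pos + vel*dt, vel + acc*dt, ang + omega*dt, omega + omega_dot*dt)

state = (np.zeros(3), np.zeros(3), np.zeros(3), np.zeros(3))
for _ in range(100):                       # hover: thrust balances gravity
    state = step(state, thrust=m*g, torque=np.zeros(3), dt=0.01)
print(state[0])                            # position stays near the origin
```

A control system such as the one mentioned in the conclusion would close the loop by computing thrust and torque from the current state instead of holding them fixed, as is done here for the hover check.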
-
Centrifugal pump modeling in FlowVision CFD software
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 907-919
This paper presents a methodology for modeling centrifugal pumps, using the NM 1250 260 main oil pump as an example. We use the FlowVision CFD software as the numerical modeling tool. Both the bench tests and the numerical modeling use water as the working fluid. The geometric model of the pump is fully three-dimensional and includes the pump housing in order to account for leakages. To reduce the required computational resources, the methodology specifies the leakages through flow rates rather than modeling them directly. Surface roughness influences the flow through the wall function model: the wall functions use an equivalent sand roughness, and a formula for converting real roughness into equivalent sand roughness is applied in this work. FlowVision uses the sliding mesh method to simulate the rotation of the impeller. This approach takes into account the nonstationary interaction between the rotor and the diffuser of the pump and allows accurate resolution of the recirculation vortices that occur at low flow rates.
The developed methodology achieves close agreement between the numerical simulation results and the experiments over all pump operating conditions. The deviation in efficiency at the nominal conditions is 0.42%, and in head it is 1.9%. The deviation of the calculated characteristics from the experimental ones grows as the flow rate increases and reaches its maximum at the far-right point of the characteristic curve (up to 4.8% in head). This occurs because of a slight mismatch between the geometric model of the impeller used in the calculations and the real pump from the experiment. Nevertheless, the arithmetic-mean relative deviation between the numerical modeling and the experiment for pump efficiency over 6 points is 0.39%, while the experimental error of the efficiency measurement is 0.72%. This meets the accuracy requirements for the calculations. In the future, this methodology can be used for a series of optimization and strength calculations, since the modeling does not require significant computational resources and takes into account the nonstationary nature of the flow in the pump.
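One plausible reading of the quoted 0.39% figure (the abstract does not spell the formula out, so the notation below is an assumption) is the arithmetic mean of the pointwise relative efficiency deviations:

$$\bar{\delta}_\eta = \frac{1}{6}\sum_{i=1}^{6}\frac{\bigl|\eta_i^{\mathrm{calc}} - \eta_i^{\mathrm{exp}}\bigr|}{\eta_i^{\mathrm{exp}}}\cdot 100\% \approx 0.39\%,$$

where $\eta_i^{\mathrm{calc}}$ and $\eta_i^{\mathrm{exp}}$ denote the computed and measured efficiencies at the $i$-th of the six operating points.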
Keywords: FlowVision, CFD, centrifugal pump, impeller, performance characteristics, roughness, leakage.
-
Image classification based on deep learning with automatic relevance determination and structured Bayesian pruning
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 927-938
Deep learning’s power stems from complex architectures; however, these can lead to overfitting, where models memorize the training data and fail to generalize to unseen examples. This paper proposes a novel probabilistic approach to mitigate this issue. We introduce two key elements: a truncated log-uniform prior with a truncated log-normal variational approximation, and automatic relevance determination (ARD) for Bayesian deep neural networks (BDNNs). Within the probabilistic framework, we employ a specially designed truncated log-uniform prior for the noise. This prior acts as a regularizer, guiding the learning process towards simpler solutions and reducing overfitting. The truncated log-normal variational approximation, in turn, allows efficient handling of the complex probability distributions inherent in deep learning models. ARD automatically identifies and removes irrelevant features or weights within a model. By integrating ARD with BDNNs, where the weights have a probability distribution, we obtain a variational bound similar to the popular variational dropout technique. Dropout randomly drops neurons during training, encouraging the model not to rely heavily on any single feature; our approach with ARD achieves similar benefits without the randomness of dropout, potentially leading to more stable training.
To evaluate our approach, we tested the model on two datasets: the Canadian Institute For Advanced Research dataset (CIFAR-10) for image classification and a dataset of macroscopic images of wood, compiled from several macroscopic wood-image datasets. Our method is applied to established architectures such as the Visual Geometry Group network (VGG) and the Residual Network (ResNet). The results demonstrate significant improvements: the model reduces overfitting while maintaining, or even improving, the accuracy of the network’s predictions on classification tasks. This validates the effectiveness of our approach in enhancing the performance and generalization capabilities of deep learning models.
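To make the ARD idea concrete, here is a minimal PyTorch sketch (not from the paper): a fully connected layer with a learnable per-weight variance, a simple stand-in regularizer, and pruning by signal-to-noise ratio. It deliberately does not reproduce the truncated log-uniform prior or the truncated log-normal variational approximation described above; all names and hyperparameters are assumptions for illustration.

```python
# Minimal sketch of ARD-style pruning for a fully connected layer (PyTorch).
# Illustrates the general idea only: each weight gets a learnable log-variance,
# a KL-like penalty pushes uninformative weights towards noise, and weights
# with a low signal-to-noise ratio are pruned. It does NOT reproduce the
# truncated log-uniform prior / truncated log-normal posterior of the paper.
import torch
import torch.nn as nn

class ARDLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.log_sigma2 = nn.Parameter(torch.full((out_features, in_features), -10.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        if self.training:
            # Local reparameterization: sample the pre-activations, not the weights.
            mean = x @ self.mu.t() + self.bias
            var = (x ** 2) @ self.log_sigma2.exp().t()
            return mean + var.clamp_min(1e-12).sqrt() * torch.randn_like(mean)
        # At test time drop weights whose signal-to-noise ratio is too low.
        snr = self.mu.abs() / self.log_sigma2.exp().sqrt()
        pruned = torch.where(snr > 1.0, self.mu, torch.zeros_like(self.mu))
        return x @ pruned.t() + self.bias

    def penalty(self):
        # Simple ARD-style regularizer, a stand-in for the variational KL term.
        return 0.5 * (self.mu ** 2 / self.log_sigma2.exp()).sum() * 1e-4

layer = ARDLinear(784, 10)
x = torch.randn(32, 784)
loss = layer(x).pow(2).mean() + layer.penalty()
loss.backward()
```

After training, weights with a low signal-to-noise ratio are zeroed out at inference time; this pruning effect is what ARD-style methods exploit to reduce overfitting without the randomness of dropout.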
-
Prediction of frequency resource occupancy in a cognitive radio system using the Kolmogorov – Arnold neural network
Computer Research and Modeling, 2025, v. 17, no. 1, pp. 109-123
For cognitive radio systems it is important to use efficient algorithms that search for free channels that can be offered to secondary users. This paper is therefore devoted to improving the accuracy of predicting the frequency resource occupancy of a cellular communication system using spatiotemporal radio environment maps. The radio environment map is formed for the fourth-generation cellular communication system Long-Term Evolution. With this in mind, a model structure has been developed that includes data generation and allows training and testing of an artificial neural network for predicting the occupancy of frequency resources represented as the contents of radio environment map cells. A method for assessing the prediction accuracy is described. The simulation model of the cellular communication system is implemented in MatLab, and the frequency resource occupancy prediction model is implemented in Python. The complete file structure of the model is presented. The experiments were performed with artificial neural networks based on the Long Short-Term Memory architecture and on the Kolmogorov – Arnold neural network architecture, including a modification of the latter. It was found that, with an equal number of parameters, the Kolmogorov – Arnold neural network learns faster on this task. The results obtained indicate an increase in the accuracy of predicting the frequency resource occupancy of the cellular communication system when the Kolmogorov – Arnold neural network is used.
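A minimal sketch of how such a prediction task can be framed is given below: the contents of the radio environment map cells over a short history window are fed to a recurrent network that predicts the occupancy of every cell at the next step. The LSTM baseline shown here is standard PyTorch; the Kolmogorov – Arnold network of the paper is a different architecture and is not reproduced, and all sizes (an 8 by 8 map, a window of 10 steps) are assumptions for illustration.

```python
# Minimal sketch of the occupancy-prediction setup: the flattened contents of
# the radio environment map for the last WINDOW time steps are fed to a
# recurrent network that predicts the occupancy of every cell at the next step.
# The LSTM baseline below is standard PyTorch; the Kolmogorov-Arnold network
# used in the paper is not reproduced here. All sizes are illustrative.
import torch
import torch.nn as nn

N_CELLS, WINDOW = 8 * 8, 10

class OccupancyPredictor(nn.Module):
    def __init__(self, n_cells, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_cells, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_cells)

    def forward(self, x):                    # x: (batch, WINDOW, N_CELLS), values in [0, 1]
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))   # occupancy probability per cell

model = OccupancyPredictor(N_CELLS)
history = torch.randint(0, 2, (32, WINDOW, N_CELLS)).float()   # synthetic occupancy history
next_map = torch.randint(0, 2, (32, N_CELLS)).float()
loss = nn.functional.binary_cross_entropy(model(history), next_map)
loss.backward()
```

Replacing the recurrent block with a Kolmogorov – Arnold network, as in the paper, changes only the model class; the data interface (map history in, next-step occupancy out) stays the same.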
-
Technique for analyzing noise-induced phenomena in two-component stochastic systems of reaction – diffusion type with power nonlinearity
Computer Research and Modeling, 2025, v. 17, no. 2, pp. 277-291
The paper constructs and studies a generalized model describing two-component systems of reaction – diffusion type with power nonlinearity under the influence of external noise. A methodology for analyzing the generalized model has been developed, which includes linear stability analysis, nonlinear stability analysis, and numerical simulation of the system’s evolution. The linear analysis uses standard techniques, in which the characteristic equation is obtained from a linearization matrix. The nonlinear stability analysis is carried out up to third-order moments inclusive. For this, the functions describing the dynamics of the components are expanded in Taylor series up to third-order terms, and the averaging procedure is then carried out using the Novikov theorem. The resulting equations form an infinite hierarchically subordinate structure, which must be truncated at some point. To achieve this, contributions from terms above third order are neglected both in the equations themselves and when constructing the moment equations. The resulting set of linear equations yields a stability matrix with a rather complex structure, which can be treated only numerically. For the numerical study of the system’s evolution, the alternating direction method was chosen. Because of the stochastic component in the analyzed system, the method was modified so that random fields with a specified distribution and correlation function, responsible for the noise contribution to the overall nonlinearity, are generated over entire layers. The developed methodology was tested on the reaction – diffusion model proposed by Barrio et al., whose study showed that the resulting structures resemble fish pigmentation patterns. This paper focuses on analyzing the system behavior in the neighborhood of a nonzero stationary point. The dependence of the real part of the eigenvalues on the wavenumber has been examined. In the linear analysis, a range of wavenumber values is identified in which Turing instability occurs. The nonlinear analysis and the numerical simulation of the system’s evolution are conducted for model parameters that, by contrast, lie outside the Turing instability region. The nonlinear analysis identified intensities of the additive noise for which, despite the absence of conditions for the emergence of diffusion instability, the system transitions to an unstable state. The results of the numerical simulation of the evolution of the tested model demonstrate the formation of Turing-type spatial structures under the influence of additive noise.
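A minimal sketch of the overall simulation setup is given below: a two-component reaction – diffusion system with additive noise evolved on a periodic 2D grid. It is not the method of the paper: a simple explicit Euler – Maruyama scheme is used instead of the alternating direction method, and the Brusselator-like reaction terms and all parameters are illustrative assumptions rather than the generalized power-nonlinearity model studied there.

```python
# Minimal sketch: explicit simulation of a two-component reaction-diffusion
# system with additive noise on a periodic 2D grid. A simple explicit scheme
# is used instead of the alternating direction method of the paper, and the
# Brusselator-like reaction terms and all parameters are illustrative only.
import numpy as np

N, dx, dt = 128, 1.0, 0.01
Du, Dv = 1.0, 10.0                      # diffusion coefficients
a, b = 2.0, 4.5                         # reaction parameters
noise_amp = 0.01                        # intensity of the additive noise

def laplacian(f):
    """Five-point Laplacian with periodic boundary conditions."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

rng = np.random.default_rng(0)
u = a + 0.1 * rng.standard_normal((N, N))       # perturbed homogeneous state
v = b / a + 0.1 * rng.standard_normal((N, N))

for _ in range(20000):
    fu = a - (b + 1.0) * u + u**2 * v           # power nonlinearity of the reaction
    fv = b * u - u**2 * v
    # Additive spatiotemporal noise, delta-correlated in this simple sketch.
    xi_u = noise_amp * rng.standard_normal((N, N)) / np.sqrt(dt)
    xi_v = noise_amp * rng.standard_normal((N, N)) / np.sqrt(dt)
    u = u + dt * (Du * laplacian(u) + fu + xi_u)
    v = v + dt * (Dv * laplacian(v) + fv + xi_v)

print(u.std())   # a nonzero standard deviation signals spatial structure
```

To probe the noise-induced regime discussed above, one would move the parameters outside the Turing region and raise noise_amp; the sketch only fixes the overall structure of such a simulation.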
-
Detecting large fractures in geological media using convolutional neural networks
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 889-901
This paper considers the inverse problem of seismic exploration: determining the structure of the medium from the wave response recorded from it. The target objects are large cracks whose size and position are to be determined.
The direct problem is solved using the grid-characteristic method. The method allows physically based algorithms for treating both the outer boundaries of the region and the contact boundaries inside it. The crack is assumed to be thin; a special condition on the crack borders is used to describe it.
The inverse problem is solved using convolutional neural networks. The input data of the neural network are seismograms interpreted as images. The output data are masks describing the medium on a structured grid. Each element of such a grid belongs to one of two classes — either an element of a continuous geological massif, or an element through which a crack passes. This approach allows us to consider a medium with an unknown number of cracks.
The neural network is trained using only samples with one crack. The final testing of the trained network is performed on additional samples containing several cracks; these samples are not involved in the training process. The purpose of testing under such conditions is to verify that the trained network generalizes sufficiently well, recognizes the signatures of a crack in the signal, and does not overfit to samples with a single crack in the medium.
The paper shows that a convolutional network trained on samples with a single crack can be used to process data with multiple cracks. The network detects fairly small cracks at great depths if they are sufficiently separated from each other in space. In this case their wave responses are clearly distinguishable on the seismogram and can be interpreted by the neural network. If the cracks are close to each other, artifacts and interpretation errors may occur, because the wave responses of nearby cracks merge on the seismogram. This causes the network to interpret several cracks located close together as one. It should be noted that a similar error would most likely be made by a human during manual interpretation of the data. The paper provides examples of such artifacts, distortions, and recognition errors.
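A minimal sketch of the input/output convention described above follows: a convolutional encoder-decoder maps a seismogram, treated as a single-channel image, to a per-cell crack probability. The architecture and all sizes are assumptions for illustration, and for simplicity the output mask here matches the input resolution, whereas in the paper the mask is defined on a structured grid covering the medium.

```python
# Minimal sketch of the seismogram-to-mask mapping: a convolutional
# encoder-decoder takes a seismogram as a single-channel image and returns,
# for every output cell, the probability that a crack passes through it.
# The architecture and sizes are illustrative assumptions, not the network
# actually used in the paper.
import torch
import torch.nn as nn

class SeismogramToMask(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )

    def forward(self, seismogram):               # (batch, 1, receivers, time samples)
        return torch.sigmoid(self.decoder(self.encoder(seismogram)))

model = SeismogramToMask()
seismograms = torch.randn(8, 1, 64, 256)           # synthetic traces: 64 receivers, 256 samples
masks = (torch.rand(8, 1, 64, 256) < 0.05).float() # synthetic two-class crack masks
loss = nn.functional.binary_cross_entropy(model(seismograms), masks)
loss.backward()
```

A real pipeline would, of course, train on synthetic seismograms produced by the grid-characteristic solver for the direct problem rather than on random tensors.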
-
On one particular model of a mixture of probability distributions in radio measurements
Computer Research and Modeling, 2012, v. 4, no. 3, pp. 563-568
This paper presents a model of a mixture of probability distributions of signal and noise. Typically, when analyzing data under conditions of uncertainty, it is necessary to use nonparametric tests. However, such an analysis of nonstationary data may be ineffective when there is uncertainty about the mean of the distribution and its parameters. The proposed model makes it feasible to handle the case of a priori nonparametric uncertainty in signal processing when the signal and the noise belong to different general populations.
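A minimal sketch of the mixture viewpoint is given below: a two-component mixture ("signal" plus "noise") is fitted to one-dimensional measurements and each sample is classified by its posterior probability. The Gaussian component family and all numbers are assumptions made for illustration; they are not necessarily the particular mixture model of the paper.

```python
# Minimal sketch: fit a two-component mixture ("signal" plus "noise") to
# one-dimensional measurements and classify samples by posterior probability.
# The Gaussian component family is an illustrative assumption; the particular
# mixture model of the paper may differ.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=800)           # background noise samples
signal = rng.normal(4.0, 0.5, size=200)          # signal-bearing samples
x = np.concatenate([noise, signal]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
posteriors = gmm.predict_proba(x)                # responsibility of each component
labels = gmm.predict(x)                          # hard assignment: signal vs noise
print(gmm.means_.ravel(), gmm.weights_)          # recovered means and mixing weights
```

The point of the paper is precisely that when the signal and the noise come from different general populations, such a mixture view can be more effective than a purely nonparametric treatment.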
-
Application of the streamline method for nonlinear filtration problems acceleration
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 709-728
The paper contains a numerical simulation of nonisothermal nonlinear flow in a porous medium. A two-dimensional unsteady problem of the flow of heavy oil, water, and steam is considered. The oil phase consists of two pseudo-components: light and heavy fractions, which, like the water component, can vaporize. The oil exhibits viscoplastic rheology, and its filtration does not obey Darcy's classical linear law. The simulation considers not only the dependence of fluid density and viscosity on temperature, but also the improvement of the oil's rheological properties as the temperature increases.
To solve this problem numerically we use the streamline method with splitting by physical processes, which consists in separating the convective heat transfer directed along the flow from thermal conduction and gravity. The article proposes a new approach to applying streamline methods that makes it possible to correctly simulate nonlinear flow problems with temperature-dependent rheology. The core of the algorithm is to treat the integration process as a set of quasi-equilibrium states obtained by solving the system on a global grid; between these states, the system is solved on a streamline grid. Using the streamline method not only accelerates the calculations but also yields a physically reliable solution, since the integration takes place on a grid aligned with the fluid flow direction.
In addition to the streamline method, the paper presents an algorithm for handling the nonsmooth coefficients that arise when simulating viscoplastic oil flow. Applying this algorithm makes it possible to keep sufficiently large time steps without changing the physical structure of the solution.
The obtained results are compared with known analytical solutions, as well as with simulation results from a commercial package. The analysis of convergence tests with respect to the number of streamlines, as well as on different streamline grids, justifies the applicability of the proposed algorithm. In addition, the reduction in computation time compared with traditional methods demonstrates the practical significance of the approach.
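The streamline machinery itself is beyond a short example, but the splitting-by-physical-processes idea can be illustrated on a toy problem. The sketch below is not from the paper: the one-dimensional setting, the explicit upwind/central schemes, and all parameters are assumptions for illustration; within each time step, convective transport and heat conduction are advanced in separate sub-steps.

```python
# Minimal sketch of splitting by physical processes on a 1D toy problem:
# within each time step, convective transport and heat conduction are advanced
# in separate sub-steps (explicit upwind convection, then explicit diffusion).
# The toy problem and parameters are illustrative and do not reproduce the
# streamline algorithm of the paper.
import numpy as np

N, L = 200, 1.0
dx = L / N
u, kappa = 1.0, 5e-4                 # convection velocity and thermal diffusivity
dt = 0.4 * min(dx / u, dx**2 / (2 * kappa))

T = np.exp(-((np.linspace(0, L, N) - 0.2) ** 2) / 0.002)   # initial temperature pulse

for _ in range(200):
    # Sub-step 1: convection along the flow (first-order upwind, periodic).
    T = T - u * dt / dx * (T - np.roll(T, 1))
    # Sub-step 2: conduction (explicit central differences, periodic).
    T = T + kappa * dt / dx**2 * (np.roll(T, 1) - 2 * T + np.roll(T, -1))

print(T.argmax() * dx)   # the pulse has been advected downstream and smeared
```

In the paper the convective sub-step is carried out along the streamlines of the actual flow field, which is what both accelerates the computation and keeps the solution physically reliable.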
Indexed in Scopus
The full-text version of the journal is also available on the website of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index