Cluster method of mathematical modeling of interval-stochastic thermal processes in electronic systems
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1023-1038
A cluster method of mathematical modeling of interval-stochastic thermal processes in complex electronic systems (ES) is developed. In the cluster method, a complex ES is represented in the form of a thermal model that is a system of clusters, each of which contains a core combining the heat-generating elements falling into the given cluster, the cluster shell, and a medium flow through the cluster. The state of the thermal process in each cluster at every moment of time is characterized by three interval-stochastic state variables: the temperatures of the core, shell, and medium flow. The elements of each cluster (core, shell, and medium flow) interact thermally with one another and with the elements of neighboring clusters. In contrast to existing methods, the cluster method makes it possible to simulate thermal processes in complex ESs while taking into account the uneven temperature distribution in the medium flow pumped through the ES, the conjugate nature of heat exchange between the medium flow, the cores, and the shells of clusters, and the interval-stochastic nature of thermal processes in the ES caused by statistical technological variation in the manufacture and installation of electronic elements and by random fluctuations in the thermal parameters of the environment. The mathematical model describing the state of thermal processes in the cluster thermal model is a system of interval-stochastic matrix-block equations with matrix and vector blocks corresponding to the clusters of the thermal model. The solutions of the interval-stochastic equations are statistical measures of the state variables of thermal processes in clusters: mathematical expectations, covariances between state variables, and variances. The methodology of the cluster method is illustrated with the example of a real ES.
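The core/shell/flow cluster balance described above can be sketched numerically. The following is a minimal single-cluster illustration, not the paper's matrix-block formulation: the conductances, capacities, and the interval of the heat source are all assumed values, and the interval-stochastic character is handled by plain Monte Carlo sampling rather than the authors' analytical statistical measures.

```python
import numpy as np

def step_cluster(T, T_amb, G, C, Q, dt):
    """One explicit-Euler step for a single cluster with state
    T = (T_core, T_shell, T_flow); G holds thermal conductances,
    C heat capacities, Q the core heat generation (all assumed)."""
    Tc, Ts, Tf = T
    dTc = (Q - G["cs"] * (Tc - Ts) - G["cf"] * (Tc - Tf)) / C["core"]
    dTs = (G["cs"] * (Tc - Ts) - G["sf"] * (Ts - Tf)
           - G["sa"] * (Ts - T_amb)) / C["shell"]
    dTf = (G["cf"] * (Tc - Tf) + G["sf"] * (Ts - Tf)
           - G["fa"] * (Tf - T_amb)) / C["flow"]
    return np.array([Tc + dt * dTc, Ts + dt * dTs, Tf + dt * dTf])

# Monte Carlo over the interval-stochastic heat source: sample Q,
# integrate to (near) steady state, then report statistical measures
# of the state variables -- mean and variance across samples.
rng = np.random.default_rng(0)
G = {"cs": 2.0, "cf": 1.5, "sf": 1.0, "sa": 0.8, "fa": 1.2}  # W/K, assumed
C = {"core": 50.0, "shell": 80.0, "flow": 10.0}              # J/K, assumed
samples = []
for _ in range(200):
    Q = rng.uniform(4.0, 6.0)          # interval-stochastic heat source, W
    T = np.array([25.0, 25.0, 25.0])   # start at ambient, deg C
    for _ in range(1000):
        T = step_cluster(T, 25.0, G, C, Q, dt=0.5)
    samples.append(T)
samples = np.asarray(samples)
mean_T, var_T = samples.mean(axis=0), samples.var(axis=0)
```

In the paper's full method the clusters are coupled and the statistics are obtained from the interval-stochastic equations directly; the sketch only conveys the three-state-per-cluster structure.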
-
Stochastic simulation of chemical reactions in subdiffusion medium
Computer Research and Modeling, 2021, v. 13, no. 1, pp. 87-104
The theory of anomalous diffusion, which describes a vast number of transport processes with a power-law mean squared displacement, has been actively advancing in recent years. Diffusion of liquids in porous media, carrier transport in amorphous semiconductors, and molecular transport in viscous environments are widely known examples of anomalous deceleration of transport processes compared to the standard model.
Direct Monte Carlo simulation is a convenient tool for studying such processes. An efficient stochastic simulation algorithm is developed in the present paper. It is based on a simple renewal process with interarrival times that have power-law asymptotics. Analytical derivations show a deep connection between this class of random processes and equations with fractional derivatives. The algorithm is further generalized by coupling it with chemical reaction simulation. This makes the stochastic approach especially useful, because the exact form of the integro-differential evolution equations for reaction-subdiffusion systems is still a matter of debate.
The proposed algorithm relies on non-Markovian random processes, hence one should carefully account for qualitatively new effects. The main question is how molecules leave the system during chemical reactions. An exact scheme that tracks all possible molecule combinations for every reaction channel is computationally infeasible because of the huge number of such combinations. This necessitates the application of simple heuristic procedures. The choice of heuristic greatly affects the obtained results, as illustrated by a series of numerical experiments.
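The renewal process with power-law interarrival times that underlies such algorithms can be sketched as follows. This is a minimal continuous-time random walk illustration, not the authors' code: the Pareto-type sampler, the tail exponent, and all numeric parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def pareto_waits(alpha, size):
    """Waiting times with power-law tail P(tau > t) ~ t**(-alpha),
    0 < alpha < 1, sampled by inverse transform from a Pareto law."""
    return rng.random(size) ** (-1.0 / alpha)

def ctrw_positions(alpha, t_max, n_walkers):
    """Continuous-time random walk: unit +/-1 jumps separated by
    heavy-tailed waits; returns walker positions at time t_max."""
    pos = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        n = active.sum()
        t[active] += pareto_waits(alpha, n)
        # a jump only counts if it happens before the observation time
        still = active.copy()
        still[active] = t[active] < t_max
        pos[still] += rng.choice([-1.0, 1.0], size=still.sum())
        active = still
    return pos

# Subdiffusive spread: with alpha = 0.6 the mean squared displacement
# at t = 400 stays far below the ~t growth of a Markovian walk.
x = ctrw_positions(alpha=0.6, t_max=400.0, n_walkers=2000)
msd = np.mean(x ** 2)
```

Coupling this sampler with reaction channels, and choosing which walkers a reaction removes, is exactly where the heuristic choices discussed above enter.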
-
Comparative analysis of human adaptation to the growth of visual information in the tasks of recognizing formal symbols and meaningful images
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 571-586
We describe an engineering-psychological experiment that continues the study of how a person adapts to the increasing complexity of logical problems: the subject is presented with a series of problems of increasing complexity, determined by the volume of initial data. The tasks require calculations in an associative or non-associative system of operations. From the way the solution time changes with the number of necessary operations, we can conclude whether the subject solves the problem in a purely sequential manner or engages additional brain resources to work in parallel mode. In a previously published experimental work, the person solving an associative problem recognized color images of meaningful objects. In the new study, a similar problem is solved for abstract monochrome geometric shapes. Analysis of the results showed that in the second case the probability of the subject switching to parallel processing of visual information is significantly reduced. The research method is based on presenting a person with two types of tasks. One type of task involves associative calculations and allows a parallel solution algorithm. The other type is a control, containing problems in which the calculations are not associative and parallel algorithms are ineffective. The task of recognizing and searching for a given object is associative: a parallel strategy significantly speeds up the solution with relatively small additional resources. As a control series of problems (to separate parallel work from the acceleration of a sequential algorithm), we use, as in the previous experiment, a non-associative comparison problem in cyclic arithmetic, presented in the visual form of the game "rock, paper, scissors". In this problem, a parallel algorithm requires a large number of processors with a small efficiency coefficient.
Therefore, the transition of a person to a parallel algorithm for solving this problem is almost impossible, and the acceleration of processing input information is possible only by increasing the processing speed. Comparing the dependence of the solution time on the volume of source data for the two types of problems allows us to identify four types of strategies for adapting to the increasing complexity of the problem: uniform sequential, accelerated sequential, parallel computing (where possible), or a strategy undefined for this method. The reduction in the number of subjects who switch to a parallel strategy when the input information is encoded with formal images shows the effectiveness of codes that evoke associations in the subject: such codes increase the speed of human perception and processing of information. The article contains a preliminary mathematical model that explains this phenomenon. It is based on the appearance of a second set of initial data, which arises in a person as a result of recognizing the depicted objects.
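The key algebraic point, that the cyclic "rock, paper, scissors" comparison is non-associative and therefore resists efficient parallel reduction, while associative operations (such as search for a maximum or a given object) reduce in parallel, is easy to verify directly. The sketch below is an illustration of that property, not part of the experiment's software.

```python
from itertools import product

# Cyclic "rock, paper, scissors" comparison: each symbol beats the
# next one in the cycle, so the pairwise "winner" operation admits no
# consistent ordering that a parallel reduction could exploit.
BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def winner(a, b):
    """Return the winning symbol of a pairwise round (a if tie)."""
    return a if BEATS[a] == b or a == b else b

def is_associative(op, domain):
    """Brute-force check of op(op(a,b),c) == op(a,op(b,c))."""
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a, b, c in product(domain, repeat=3))

symbols = list(BEATS)
rps_assoc = is_associative(winner, symbols)   # cyclic comparison
max_assoc = is_associative(max, [1, 2, 3])    # associative control
```

A counterexample: winner(winner(rock, paper), scissors) gives scissors, while winner(rock, winner(paper, scissors)) gives rock.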
-
Simulation of the initial stage of a two-component rarefied gas mixture outflow through a thin slit into vacuum
Computer Research and Modeling, 2021, v. 13, no. 4, pp. 747-759
The paper considers the process of flow formation during the outflow of a binary gas mixture through a thin slit into vacuum. An approach to modeling the flows of rarefied gas mixtures in the transitional regime is proposed based on the direct solution of the Boltzmann kinetic equation, in which the conservative projection method is used to calculate the collision integrals. Calculation formulas are provided, and the calculation procedure is described in detail as applied to the flow of a binary gas mixture. The Lennard–Jones potential is used as the interaction potential of the molecules. A software modeling environment has been developed that makes it possible to study the flows of gas mixtures in the transitional regime on systems of cluster architecture. Owing to code parallelization technologies, calculations were accelerated by a factor of 50–100. Numerical simulation of the two-dimensional outflow of a binary argon–neon gas mixture from a vessel into vacuum through a thin slit is carried out for various values of the Knudsen number. Graphs of the dependence of the output flow of the gas mixture components on time in the process of flow establishment are obtained. Non-stationary regions of strong separation of the gas mixture components, in which the ratio of molecular densities reaches 10 or more, were discovered. The discovered effect may have applications in the problem of gas mixture separation.
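The Knudsen number that parametrizes these simulations is the ratio of the molecular mean free path to the characteristic size (here, the slit width). A minimal sketch using the standard hard-sphere estimate; the temperature, pressure, molecular diameter, and slit width below are illustrative assumptions, not the paper's parameters.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(T, p, d):
    """Hard-sphere mean free path: lambda = k*T / (sqrt(2)*pi*d^2*p),
    with temperature T [K], pressure p [Pa], molecular diameter d [m]."""
    return K_B * T / (math.sqrt(2) * math.pi * d ** 2 * p)

def knudsen(T, p, d, L):
    """Kn = lambda / L for characteristic size L (the slit width here)."""
    return mean_free_path(T, p, d) / L

# Illustrative numbers: argon-like molecules at room temperature and
# atmospheric pressure escaping through a 1-micron slit.
kn = knudsen(T=300.0, p=101325.0, d=3.7e-10, L=1e-6)
```

Values of Kn well below 0.01 correspond to continuum flow, Kn of order 0.1–10 to the transitional regime studied in the paper.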
-
The dynamic model of a high-rise firefighting drone
Computer Research and Modeling, 2022, v. 14, no. 1, pp. 115-126
The use of unmanned aerial vehicles (UAVs) in high-rise firefighting operations is an effective way of reaching a fire on high floors quickly. The article proposes a quadrotor-type firefighting UAV model carrying a launcher that fires a missile containing fire-extinguishing powder into a fire. The kinematic model describing the flight of this UAV is built with the Newton–Euler method, both for normal motion and for the moment of launching a firefighting missile. Simulations testing the validity of the kinematic model and the motion of the UAV show that the variation of the Euler angles, flight angles, and aerodynamic angles during a flight stays within an acceptable range, and that the overload in flight remains within guaranteed limits. The UAV flew to the correct position to launch the required fire-extinguishing ammunition. These results form the basis for building a control system for high-rise firefighting drones in Vietnam.
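The translational part of a Newton–Euler quadrotor model can be sketched as follows. This is a simplified illustration with aerodynamic drag neglected, not the paper's exact model; the ZYX Euler-angle convention and the take-off mass are assumptions.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def rotation_zyx(phi, theta, psi):
    """Body-to-world rotation from roll (phi), pitch (theta), yaw (psi)."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [cth * cps, sph * sth * cps - cph * sps, cph * sth * cps + sph * sps],
        [cth * sps, sph * sth * sps + cph * cps, cph * sth * sps - sph * cps],
        [-sth,      sph * cth,                   cph * cth],
    ])

def translational_step(v, angles, thrust, mass, dt):
    """Newton-Euler translational dynamics: total rotor thrust acts
    along the body z-axis, gravity along world -z (drag neglected)."""
    R = rotation_zyx(*angles)
    accel = R @ np.array([0.0, 0.0, thrust]) / mass - np.array([0.0, 0.0, G])
    return v + dt * accel

# Hover check: thrust equal to weight at level attitude leaves the
# velocity unchanged.
m = 6.0  # assumed take-off mass, kg
v_hover = translational_step(np.zeros(3), (0.0, 0.0, 0.0), m * G, m, dt=0.01)
```

At missile launch the paper's model additionally accounts for the recoil impulse and the mass change, which this sketch omits.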
-
A study of nonlinear processes at the interface between gas flow and the metal wall of a microchannel
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 781-794
The work is devoted to the study of the influence of nonlinear processes in the boundary layer on the general character of gas flows in microchannels of technical systems. Such a study is directly relevant to nanotechnology problems. One of the important problems in this area is the analysis of gas flows in microchannels in the case of transitional and supersonic flows. The results of this analysis are important for the gas-dynamic spraying technique and for the synthesis of new nanomaterials. Because full-scale experiments at micro- and nanoscale are difficult to implement, they are most often replaced by computer simulation. The efficiency of computer simulation is achieved both by the use of new multiscale models and by combining mesh and particle methods. In this work, we use the molecular dynamics method. It is applied to study the establishment of a gas microflow in a metal channel. Nitrogen was chosen as the gaseous medium; the metal walls of the microchannel consisted of nickel atoms. In numerical experiments, the accommodation coefficients were calculated at the boundary between the gas flow and the metal wall. The study of the microsystem in the boundary layer made it possible to form a multicomponent macroscopic model of the boundary conditions. This model was integrated into the macroscopic description of the flow based on a system of quasi-gas-dynamic equations. On the basis of the transformed gas-dynamic model, calculations of the microflow in a real microsystem were carried out. The results were compared with a classical flow calculation that does not take nonlinear processes in the boundary layer into account. The comparison showed the need to use the developed model of boundary conditions and to integrate it with the classical gas-dynamic approach.
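The accommodation coefficients mentioned above are commonly defined through the mean energies (or momenta) of incident and reflected molecules relative to full thermalization with the wall. A minimal sketch of the energy variant; the numeric values are illustrative assumptions, not the paper's molecular-dynamics results.

```python
def energy_accommodation(e_incident, e_reflected, e_wall):
    """Energy accommodation coefficient
        alpha = (E_i - E_r) / (E_i - E_w),
    where E_i and E_r are the mean energies of incident and reflected
    molecules and E_w is the mean energy molecules would carry if fully
    equilibrated with the wall.  alpha = 1 means complete accommodation,
    alpha = 0 means specular (energy-preserving) reflection."""
    return (e_incident - e_reflected) / (e_incident - e_wall)

# Illustrative values (assumed): hot nitrogen hitting a cooler nickel
# wall, with reflected molecules only partially thermalized.
alpha = energy_accommodation(e_incident=1.0, e_reflected=0.55, e_wall=0.4)
```

In a molecular-dynamics run, the three mean energies are accumulated over many wall collisions before this ratio is evaluated.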
-
Centrifugal pump modeling in FlowVision CFD software
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 907-919
This paper presents a methodology for modeling centrifugal pumps using the example of the NM 1250 260 main oil centrifugal pump. We use the FlowVision CFD software as the numerical modeling tool. Both the bench tests and the numerical modeling use water as the working fluid. The geometric model of the pump is fully three-dimensional and includes the pump housing to account for leakages. To reduce the required computational resources, the methodology specifies leakages by flow rate rather than modeling them directly. Surface roughness influences the flow through the wall-function model. The wall-function model uses an equivalent sand roughness, and a formula for converting real roughness into equivalent sand roughness is applied in this work. FlowVision uses the sliding-mesh method to simulate the rotation of the impeller. This approach takes into account the nonstationary interaction between the rotor and the diffuser of the pump, allowing accurate resolution of the recirculation vortices that occur at low flow rates.
The developed methodology achieves high consistency between numerical simulation results and experiments across all pump operating conditions. The deviation in efficiency at nominal conditions is 0.42%, and in head 1.9%. The deviation of the calculated characteristics from the experimental ones grows with the flow rate and reaches a maximum at the far-right point of the characteristic curve (up to 4.8% in head). This occurs because of a slight mismatch between the geometric model of the impeller used in the calculation and the real pump from the experiment. Nevertheless, the arithmetic mean relative deviation between numerical modeling and experiment for pump efficiency over 6 points is 0.39%, with an experimental efficiency measurement error of 0.72%. This meets the accuracy requirements for the calculations. In the future, this methodology can be used for a series of optimization and strength calculations, since the modeling does not require significant computational resources and takes the non-stationary nature of the flow in the pump into account.
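The "arithmetic mean relative deviation over 6 points" quoted above is a simple aggregate and can be sketched directly; the efficiency values below are made-up placeholders, not the paper's data.

```python
def mean_relative_deviation(computed, measured):
    """Arithmetic mean of |computed - measured| / measured over the
    operating points, expressed as a percentage."""
    if len(computed) != len(measured):
        raise ValueError("point counts must match")
    devs = [abs(c - m) / m * 100.0 for c, m in zip(computed, measured)]
    return sum(devs) / len(devs)

# Illustrative efficiency values (percent) at 6 operating points.
eta_cfd = [70.1, 74.0, 76.5, 77.2, 76.0, 73.5]
eta_exp = [70.4, 74.3, 76.3, 77.0, 75.6, 73.0]
mrd = mean_relative_deviation(eta_cfd, eta_exp)
```

Comparing this aggregate against the experimental measurement error, as the paper does (0.39% vs. 0.72%), is what justifies accepting the model.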
Keywords: FlowVision, CFD, centrifugal pump, impeller, performance characteristics, roughness, leakage.
-
Physical research and numerical modeling of the lower ionosphere perturbed by powerful radio emission. Part 2. Results of numerical calculations and their analysis
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1237-1262
The second part presents numerical studies of the parameters of the lower ionosphere at altitudes of 40–90 km when heated by powerful high-frequency radio waves of various frequencies and powers. The problem statement is given in the first part of the article. The main attention is paid to the interrelation between the energy and kinetic parameters of the disturbed $D$-region of the ionosphere in the processes that determine the absorption and transformation of the radio beam energy flux in space and time. The possibility of a significant difference between the daytime and nighttime behavior of the parameters of the disturbed region, both in magnitude and in space-time distribution, is shown. In the absence of sufficiently reliable values of the rate constants for a number of important kinetic processes, numerical studies were carried out in stages, with the gradual addition of individual processes and kinetic blocks, each corresponding to a certain physical content. It is shown that the energy thresholds for inelastic collisions of electrons with air molecules play the main role. This approach made it possible to detect the emergence of a self-oscillating mode of parameter variation when the main channel for energy losses in inelastic processes is the most energy-intensive process, ionization. This effect may play a role in plasma studies using high-frequency inductive and capacitive discharges. The results of calculations of the ionization and optical parameters of the disturbed $D$-region for daytime conditions are presented. The electron temperature, electron density, and emission coefficients in the visible and infrared ranges of the spectrum are obtained for various values of the power and frequency of the radio beam in the lower ionosphere.
The height-time distribution of the absorbed radiation power is calculated, which is necessary for studies of higher layers of the ionosphere. The influence of energy losses by electrons on the excitation of vibrational and metastable states of molecules, both on the electron temperature and on the general behavior of the parameters, has been studied in detail. It is shown that under nighttime conditions, when the electron concentration appears only at altitudes of about 80 km and the concentration of heavy particles is two orders of magnitude lower than the average for the $D$-region, large-scale gas-dynamic motion can develop at sufficient radio emission power. An algorithm based on the MacCormack method was developed, and two-dimensional gas-dynamic calculations of the behavior of the parameters of the perturbed region were performed with some simplifications of the kinetics.
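The MacCormack method mentioned above is a predictor-corrector finite-difference scheme. As a minimal stand-in for the paper's two-dimensional gas-dynamic solver, the sketch below applies it to 1D linear advection on a periodic grid; the grid, Courant number, and initial profile are assumptions.

```python
import numpy as np

def maccormack_advection(u, c, dx, dt, steps):
    """MacCormack predictor-corrector for u_t + c u_x = 0 on a
    periodic grid: forward difference in the predictor, backward
    difference in the corrector, averaged to second-order accuracy."""
    u = u.astype(float).copy()
    nu = c * dt / dx  # Courant number; stability requires |nu| <= 1
    for _ in range(steps):
        u_pred = u - nu * (np.roll(u, -1) - u)                        # predictor
        u = 0.5 * (u + u_pred - nu * (u_pred - np.roll(u_pred, 1)))   # corrector
    return u

# Advect a smooth bump once around the periodic domain; with nu = 0.5
# the profile should return close to its starting position.
n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)
u1 = maccormack_advection(u0, c=1.0, dx=1.0 / n, dt=0.005, steps=200)
```

For linear advection the scheme conserves the integral of u exactly, which makes a convenient sanity check; the full gas-dynamic version applies the same predictor-corrector pattern to the flux vector of the conservation laws.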
-
Cloud interpretation of the entropy model for calculating the trip matrix
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 89-103
As the population of cities grows, the need to plan the development of transport infrastructure becomes more acute, and transport modeling packages are created for this purpose. These packages usually contain a set of convex optimization problems whose iterative solution leads to the desired equilibrium distribution of flows along the paths. One direction for the development of transport modeling is the construction of more accurate generalized models that take into account different types of passengers, their travel purposes, and the specifics of the personal and public modes of transport that agents can use. Another important direction is improving the efficiency of the calculations: because of the large dimension of modern transport networks, the numerical search for the equilibrium distribution of flows along the paths is quite expensive, and the iterative nature of the entire solution process only makes this worse. One approach that reduces the amount of computation is the construction of consistent models, which make it possible to combine the blocks of the classical four-stage model into a single optimization problem. This eliminates the iterative running of the blocks, moving from a separate optimization problem at each stage to one general problem. Earlier work has proven that such approaches provide equivalent solutions; however, their validity and interpretability are worth considering. The purpose of this article is to substantiate a single problem that combines both the calculation of the trip matrix and the modal choice, for the generalized case when there are different layers of demand, types of agents, and classes of vehicles in the transport network.
The article provides possible interpretations for the gauge parameters used in the problem, as well as for the dual factors associated with the balance constraints. The authors of the article also show the possibility of combining the considered problem with a block for determining network load into a single optimization problem.
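The classical entropy model for the trip matrix is solved by balancing a cost-discounted seed matrix to the origin and destination totals; the balancing factors are the exponentials of the dual variables of the balance constraints discussed above. A toy sketch of this Sinkhorn-style iteration (the zone totals, cost matrix, and gauge parameter beta are assumed values, not from the article):

```python
import numpy as np

def entropy_trip_matrix(origins, destinations, cost, beta, iters=200):
    """Entropy (gravity) model: maximize trip-matrix entropy subject to
    row sums = origins and column sums = destinations, which gives
    T_ij = a_i * b_j * exp(-beta * c_ij).  The balancing factors a_i,
    b_j are exponentiated dual variables of the balance constraints,
    found here by alternating (Sinkhorn-style) rescaling."""
    K = np.exp(-beta * cost)
    a = np.ones(len(origins))
    for _ in range(iters):
        b = destinations / (K.T @ a)
        a = origins / (K @ b)
    return a[:, None] * K * b[None, :]

# Toy 3-zone network with balanced productions and attractions.
O = np.array([100.0, 50.0, 30.0])
D = np.array([80.0, 60.0, 40.0])
C = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 2.0],
              [3.0, 2.0, 1.0]])
T = entropy_trip_matrix(O, D, C, beta=0.5)
```

The combined problem discussed in the article folds this entropy block, the modal-choice block, and (as the authors note) potentially the network-loading block into one optimization problem instead of iterating such balancing separately at each stage.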
-
Image classification based on deep learning with automatic relevance determination and structured Bayesian pruning
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 927-938
Deep learning's power stems from complex architectures; however, these can lead to overfitting, where models memorize the training data and fail to generalize to unseen examples. This paper proposes a novel probabilistic approach to mitigate this issue. We introduce two key elements: a truncated log-uniform prior with a truncated log-normal variational approximation, and automatic relevance determination (ARD) with Bayesian deep neural networks (BDNNs). Within the probabilistic framework, we employ a specially designed truncated log-uniform prior for the noise. This prior acts as a regularizer, guiding the learning process towards simpler solutions and reducing overfitting. Additionally, a truncated log-normal variational approximation is used for efficient handling of the complex probability distributions inherent in deep learning models. ARD automatically identifies and removes irrelevant features or weights within a model. By integrating ARD with BDNNs, where the weights have a probability distribution, we obtain a variational bound similar to that of the popular variational dropout technique. Dropout randomly drops neurons during training, encouraging the model not to rely heavily on any single feature. Our approach with ARD achieves similar benefits without the randomness of dropout, potentially leading to more stable training.
To evaluate our approach, we tested the model on two datasets: the Canadian Institute For Advanced Research dataset (CIFAR-10) for image classification and a dataset of macroscopic images of wood, compiled from several macroscopic wood image datasets. Our method is applied to established architectures such as the Visual Geometry Group network (VGG) and the Residual Network (ResNet). The results demonstrate significant improvements: the model reduced overfitting while maintaining, or even improving, the accuracy of the network's predictions on classification tasks. This validates the effectiveness of our approach in enhancing the performance and generalization capabilities of deep learning models.
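A common way to realize ARD-style pruning in a Bayesian network is to drop weights whose posterior signal-to-noise ratio is low. The sketch below is a generic illustration of that idea, not the paper's exact criterion; the threshold, posterior means, and scales are all assumed values.

```python
import numpy as np

def ard_prune_mask(mu, sigma, snr_threshold=1.0):
    """ARD-style pruning sketch: each weight has a Gaussian posterior
    N(mu, sigma^2); weights whose signal-to-noise ratio |mu| / sigma
    falls below the threshold are treated as irrelevant and removed
    (set to zero at test time)."""
    snr = np.abs(mu) / sigma
    return snr >= snr_threshold

# Assumed posterior parameters for a layer of 1000 weights: weights
# the data does not constrain get large sigma, hence low SNR, and
# are pruned away.
rng = np.random.default_rng(0)
mu = rng.normal(0.0, 1.0, size=1000)
sigma = rng.uniform(0.5, 2.0, size=1000)
mask = ard_prune_mask(mu, sigma)
pruned_weights = np.where(mask, mu, 0.0)
sparsity = 1.0 - mask.mean()
```

Unlike dropout, which zeroes random neurons each step, this selection is deterministic given the learned posterior, which is the stability benefit the abstract points to.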
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index