All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Application of the streamline method for nonlinear filtration problems acceleration
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 709-728
The paper presents a numerical simulation of nonisothermal nonlinear flow in a porous medium. A two-dimensional unsteady problem of heavy oil, water, and steam flow is considered. The oil phase consists of two pseudocomponents, light and heavy fractions, which, like the water component, can vaporize. The oil exhibits viscoplastic rheology, so its filtration does not obey Darcy's classical linear law. The simulation accounts not only for the dependence of fluid density and viscosity on temperature, but also for the improvement of the oil's rheological properties with increasing temperature.
To solve this problem numerically, we use a streamline method with splitting by physical processes, which separates convective heat transfer directed along the filtration from thermal conduction and gravity. The article proposes a new approach to applying streamline methods that makes it possible to correctly simulate nonlinear flow problems with temperature-dependent rheology. The core of the algorithm is to treat the integration process as a sequence of quasi-equilibrium states obtained by solving the system on a global grid; between these states, the system is solved on a streamline grid. Using the streamline method not only accelerates the calculations but also yields a physically reliable solution, since the integration takes place on a grid aligned with the direction of fluid flow.
In addition to the streamline method, the paper presents an algorithm for handling the nonsmooth coefficients that arise in the simulation of viscoplastic oil flow. This algorithm makes it possible to keep sufficiently large time steps without changing the physical structure of the solution.
The obtained results are compared with known analytical solutions as well as with the results of a commercial simulation package. Convergence tests with respect to the number of streamlines and to different streamline grids justify the applicability of the proposed algorithm. Moreover, the reduction of computation time compared with traditional methods demonstrates the practical significance of the approach.
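The splitting described above can be sketched in a simplified one-dimensional form. The sketch below illustrates only the general idea — upwind transport along a streamline between two quasi-equilibrium states, followed by an explicit conduction step on the global grid — and is not the paper's actual algorithm; all names and parameter values are assumptions.

```python
import numpy as np

def advect_along_streamline(c, v, dt, ds):
    # upwind transport of a quantity c along one streamline (v > 0)
    cn = c.copy()
    cn[1:] -= v * dt / ds * (c[1:] - c[:-1])
    return cn

def conduct_on_grid(T, alpha, dt, dx):
    # explicit heat-conduction step on the global grid
    Tn = T.copy()
    Tn[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return Tn

# one splitting cycle: convection along the streamline, then
# conduction on the global grid between quasi-equilibrium states
T = np.linspace(400.0, 300.0, 50)     # placeholder temperature profile
T = advect_along_streamline(T, v=1.0, dt=0.4, ds=1.0)
T = conduct_on_grid(T, alpha=0.1, dt=0.4, dx=1.0)
```

Because the convective step acts on a grid aligned with the flow, its one-dimensional stencil introduces no transverse numerical diffusion, which is the usual motivation for streamline methods.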
-
Experimental identification of the organization of mental calculations of the person on the basis of algebras of different associativity
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 311-327
This work continues research into a person's ability to improve the productivity of information processing by working in parallel or by increasing the speed of the analyzers. A subject receives a series of tasks whose solution requires processing a certain amount of information; the time and correctness of each solution are recorded. The dependence of the average solution time on the amount of information in the task is determined from the correctly solved tasks. In the proposed method, the tasks involve evaluating expressions in two algebras, one associative and the other non-associative. To facilitate the subjects' work, the experiment used figurative graphic images of the algebra elements. Non-associative computation was implemented as the game "rock-paper-scissors": the subject had to determine the winning symbol in a long line of these figures, assuming they appear sequentially from left to right and each plays against the previous winner. Associative computation was based on recognizing drawings from a finite set of simple images: the subject had to determine which figure from the set was missing from the line, or to state that all the pictures were present; in each task at most one picture was missing. Computation in the associative algebra admits parallel counting, whereas without associativity only sequential computation is possible. Therefore, analyzing the solution times over a series of tasks distinguishes uniform sequential, accelerated sequential, and parallel computing strategies. The experiments showed that all subjects used a uniform sequential strategy for the non-associative tasks. For the associative task, all subjects used parallel computing, and some accelerated their parallel computing as the complexity of the task grew.
A small fraction of the subjects, at high complexity, judging by the evolution of the solution time, supplemented the parallel count with a sequential stage of calculations (possibly to verify the solution). We developed a special method for estimating the rate at which a person processes input information. It allowed us to estimate the degree of parallelism of the computation in the associative task: parallelism of level two to three was registered. The characteristic speed of information processing in the sequential case (about one and a half symbols per second) is half the typical speed of human image recognition; the difference apparently reflects the time actually spent on the computation itself. For the associative task with the minimum amount of information, the solution time is close to that of the non-associative case, or smaller by less than a factor of two. This is probably because, for a small number of symbols, recognition nearly exhausts the computation in the non-associative task used.
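The contrast between the two task types can be made concrete. The sketch below (the encoding and names are my own, not from the experiment) shows why the rock-paper-scissors task forces a left-to-right fold, while the missing-picture task reduces over an associative operation (set union) whose halves can be computed independently:

```python
from functools import reduce

BEATS = {('rock', 'scissors'), ('scissors', 'paper'), ('paper', 'rock')}

def play(prev_winner, symbol):
    # non-associative: each new symbol plays against the previous winner
    if (prev_winner, symbol) in BEATS or prev_winner == symbol:
        return prev_winner
    return symbol

def winner(line):
    # only a sequential left-to-right fold is valid for this operation
    return reduce(play, line)

def missing(line, alphabet):
    # associative: the set unions of the two halves can be formed in parallel
    left, right = set(line[:len(line) // 2]), set(line[len(line) // 2:])
    return set(alphabet) - (left | right)

# non-associativity of `play`: regrouping changes the result
a = play(play('rock', 'paper'), 'scissors')   # left grouping  -> 'scissors'
b = play('rock', play('paper', 'scissors'))   # right grouping -> 'rock'
```

Since `play` is not associative, the fold cannot be split into independently computed halves, which is exactly why only sequential strategies were observed for this task.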
-
Numerical study of intense shock waves in dusty media with a homogeneous and two-component carrier phase
Computer Research and Modeling, 2020, v. 12, no. 1, pp. 141-154
The article is devoted to the numerical study of shock-wave flows in inhomogeneous media — dusty gas mixtures. A two-velocity, two-temperature model is used, in which the dispersed component of the mixture has its own velocity and temperature. To describe the change in the concentration of the dispersed component, a conservation equation for its "average density" is solved. The study takes into account interphase thermal interaction and interphase momentum exchange. The mathematical model describes the carrier component of the mixture as a viscous, compressible, heat-conducting medium. The system of equations is solved with the explicit second-order MacCormack finite-difference method; to obtain a monotone numerical solution, a nonlinear correction scheme is applied to the grid function. In the shock-wave flow problem, Dirichlet boundary conditions are specified for the velocity components, and Neumann boundary conditions for the other unknown functions. To reveal how the dynamics of the whole mixture depend on the properties of the solid component, the numerical calculations varied the parameters of the dispersed phase: the volume fraction and the linear size of the dispersed inclusions. The goal of the research was to determine how the properties of the solid inclusions affect the dynamics of the carrier gas. The motion of an inhomogeneous medium was studied in a shock tube divided into two parts, with the gas pressure in one compartment higher than in the other. The article simulates the motion of a normal shock wave from the high-pressure chamber into the low-pressure chamber filled with a dusty medium, and the subsequent reflection of the shock wave from a solid surface.
The analysis of the numerical calculations showed that decreasing the linear particle size of the gas suspension and increasing the physical density of the particle material lead to a more intense reflected shock wave, with higher gas temperature and density, and a lower propagation speed of the reflected disturbance.
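The MacCormack scheme mentioned above can be sketched minimally for the scalar linear advection equation u_t + a·u_x = 0, rather than for the full two-phase system (the nonlinear correction of the grid function is omitted here):

```python
import numpy as np

def maccormack_step(u, a, dt, dx):
    # predictor: forward spatial difference
    up = u.copy()
    up[:-1] = u[:-1] - a * dt / dx * (u[1:] - u[:-1])
    # corrector: backward spatial difference, averaged with the old time level
    un = u.copy()
    un[1:] = 0.5 * (u[1:] + up[1:] - a * dt / dx * (up[1:] - up[:-1]))
    return un
```

Without the nonlinear correction described in the abstract, this second-order scheme produces oscillations near discontinuities such as shock fronts, which is why a correction of the grid function is applied to restore monotonicity.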
-
Model for building of the radio environment map for cognitive communication system based on LTE
Computer Research and Modeling, 2022, v. 14, no. 1, pp. 127-146
The paper is devoted to the secondary use of spectrum in telecommunication networks. One solution to this problem is the use of cognitive radio technologies with dynamic spectrum access, whose successful operation requires a large amount of information, including the parameters of base stations and network subscribers. This information should be stored and processed in a radio environment map — a spatio-temporal database of all activity in the network that makes it possible to determine the frequencies available for use at a given time. The paper presents a two-level model for building the radio environment map of an LTE cellular communication system, with local and global levels, described by the following parameters: a set of frequencies, signal attenuation, a signal propagation map, the grid step, and the current time step. The key objects of the model are the base station and the subscriber device. The main parameters of a base station are: name, identifier, cell coordinates, band number, radiated power, identifiers of connected subscriber devices, and allocated resource blocks. A subscriber device is described by: name, identifier, location, current coordinates of the device's cell, base-station identifier, frequency band, numbers of the resource blocks used for communication with the station, radiated power, data transmission status, a list of the nearest stations, and the movement and communication-session schedules of the device. An algorithm implementing the model is presented that takes into account the movement scenarios and communication sessions of the subscriber devices, along with a method for calculating the radio environment map at a point of the coordinate grid that accounts for propagation losses of the radio signals from emitting devices.
The software implementation of the model was performed in the MatLab package, and approaches that increase its speed are described. In the simulation, the parameters were chosen taking into account data from existing communication systems and the economy of computing resources. Experimental results of the map-construction algorithm are demonstrated, confirming the correctness of the developed model.
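A possible sketch of the per-point map calculation: the strongest received power at each grid node over all base stations, using the standard free-space path-loss formula as a stand-in for the paper's propagation-loss model. The station data, frequency, and powers below are invented for illustration.

```python
import numpy as np

def fspl_db(d_km, f_mhz):
    # free-space path loss in dB; distance clamped to 1 m to avoid log(0)
    return 20.0 * np.log10(np.maximum(d_km, 1e-3)) + 20.0 * np.log10(f_mhz) + 32.44

def radio_map(stations, xs, ys, f_mhz):
    # strongest received power (dBm) at every node of the coordinate grid
    X, Y = np.meshgrid(xs, ys)
    best = np.full(X.shape, -np.inf)
    for sx, sy, tx_dbm in stations:          # (x in m, y in m, transmit power in dBm)
        d_km = np.hypot(X - sx, Y - sy) / 1000.0
        best = np.maximum(best, tx_dbm - fspl_db(d_km, f_mhz))
    return best

stations = [(0.0, 0.0, 43.0), (2000.0, 0.0, 43.0)]   # two hypothetical eNodeBs
rmap = radio_map(stations, np.linspace(0, 2000, 41), np.linspace(0, 2000, 41), 1800.0)
```

In a real radio environment map, the free-space term would be replaced by the system's propagation model and combined with the time-dependent activity data stored for each station and device.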
-
Numerical study of Taylor – Couette turbulent flow
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 395-408
In this paper, turbulent Taylor – Couette flow is investigated using two-dimensional modeling based on the Reynolds-averaged Navier – Stokes (RANS) equations and a new two-fluid approach to turbulence, at Reynolds numbers in the range from 1000 to 8000. The flow is driven by a rotating inner cylinder and a stationary outer cylinder; the case of a cylinder diameter ratio of 1:2 is considered. The emerging circular flow is known to be characterized by anisotropic turbulence, and mathematical modeling of such flows is a difficult task. To describe them, one uses either direct simulation methods, which require large computational costs, rather laborious Reynolds-stress methods, or linear RANS models with special rotation corrections capable of describing anisotropic turbulence. To compare different approaches to turbulence modeling, the paper presents numerical results for the linear RANS models SARC and SST-RC, the Reynolds-stress method SSG/LRR-RSM-w2012, direct numerical simulation (DNS), and the new two-fluid model. It is shown that the recently developed two-fluid model adequately describes the flow under consideration; in addition, it is easy to implement numerically and has good convergence.
-
Numerical model of transport in problems of instabilities of the Earth’s low-latitude ionosphere using a two-dimensional monotonized Z-scheme
Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1011-1023
The aim of the work is to study a monotone finite-difference scheme of second-order accuracy, created as a generalization of the one-dimensional Z-scheme. The study was carried out on model transport equations for an incompressible medium. The paper describes a two-dimensional generalization of the Z-scheme with nonlinear correction that uses, instead of fluxes, oblique differences containing values from different time layers. The monotonicity of the resulting nonlinear scheme is verified numerically for limiting functions of two types, for both smooth and nonsmooth solutions, and numerical estimates of the order of accuracy of the constructed scheme are obtained.
The constructed scheme is absolutely stable, but it loses the monotonicity property when the Courant step is exceeded. A distinctive feature of the proposed finite-difference scheme is the minimality of its stencil. The scheme is intended for models of plasma instabilities of various scales in the low-latitude ionospheric plasma of the Earth. One real problem leading to such equations is the numerical simulation of highly nonstationary medium-scale processes in the Earth's ionosphere under conditions of Rayleigh – Taylor instability and of smaller-scale plasma structures generated by instabilities of other types, which leads to the F-spread phenomenon. Because transport processes in the ionospheric plasma are controlled by the magnetic field, the plasma is assumed to be incompressible in the direction transverse to the magnetic field.
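The loss of monotonicity above the Courant limit is easy to reproduce numerically. The check below uses plain first-order upwind transport as a stand-in (the Z-scheme's stencil is not reproduced here); it verifies that no new extrema appear over a run:

```python
import numpy as np

def upwind_step(u, c):
    # first-order upwind for u_t + a u_x = 0; c = a*dt/dx is the Courant number
    un = u.copy()
    un[1:] = u[1:] - c * (u[1:] - u[:-1])
    return un

def preserves_monotonicity(u0, c, steps=50):
    # a monotone scheme creates no new extrema: the solution must stay
    # inside [min(u0), max(u0)] for all time steps
    u = u0.copy()
    for _ in range(steps):
        u = upwind_step(u, c)
        if u.max() > u0.max() + 1e-9 or u.min() < u0.min() - 1e-9:
            return False
    return True

step = np.where(np.arange(100) < 50, 1.0, 0.0)   # discontinuous test profile
```

For 0 ≤ c ≤ 1 the upwind update is a convex combination of neighboring values, so it is monotone; for c > 1 an overshoot appears at the discontinuity on the very first step, mirroring the Courant restriction stated for the Z-scheme.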
-
Simulation results of field experiments on the creation of updrafts for the development of artificial clouds and precipitation
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 941-956
A promising method of increasing precipitation in arid climates is to create a vertical high-temperature jet seeded with hygroscopic aerosol. Such an installation makes it possible to create artificial clouds capable of precipitation in a cloudless atmosphere, unlike traditional methods of artificial precipitation enhancement, which only increase the efficiency of precipitation formation in natural clouds by seeding them with crystallization and condensation nuclei. To increase the power of the jet, calcium chloride, carbamide, and salt in the form of a coarse aerosol are added, as well as a novel NaCl/TiO2 core-shell nanopowder capable of condensing much more water vapor than the listed types of aerosol. The dispersed inclusions in the jet also serve as crystallization and condensation centers in the created cloud, increasing the possibility of precipitation. Convective flows in the atmosphere are simulated with the FlowVision mathematical model of large-scale atmospheric flows; the equations of motion, energy, and mass transfer are solved in relative variables. The problem statement is divided into two parts: a model of the initial jet and the FlowVision large-scale atmospheric model. The lower region, where the initial high-speed jet flows, is calculated using a compressible formulation, with the energy equation solved with respect to the total enthalpy. This division into two separate subdomains is necessary to correctly compute the initial turbulent jet at high velocity (M > 0.3). The main mathematical relations of the model are given. Numerical experiments were carried out with the presented model, using experimental data from field tests of the installation for creating artificial clouds as initial data.
Good agreement with the experiment is obtained: in 55% of the calculations, the vertical velocity at a height of 400 m (more than 2 m/s) and the height of the jet rise (more than 600 m) are within 30% of the experimental values, and in 30% of the calculations the results agree with the experiment completely. The results of the numerical simulation make it possible to evaluate the use of the high-speed jet method for stimulating artificial updrafts and creating precipitation. The calculations were carried out with the FlowVision CFD software on the SUSU Tornado supercomputer.
Keywords: artificial clouds, numerical simulation, CFD, artificial precipitation, meteorology, jet, meteotron.
-
On the boundaries of optimally designed elastoplastic structures
Computer Research and Modeling, 2017, v. 9, no. 3, pp. 503-515
This paper studies minimum-volume elastoplastic bodies. One part of the boundary of every body under review is fixed to the same spatial points, while stresses are prescribed on the remaining part of the boundary surface (the loaded surface). The shape of the loaded surface can vary in space, but the limit load factor, calculated under the assumption that the bodies are filled with an elastoplastic medium, must not be less than a fixed value. In addition, all varying bodies are assumed to contain a certain sample manifold of limited volume.
The following problem is posed: what is the maximum number of cavities (holes, in the two-dimensional case) that a minimum-volume body (plate) can have under the above constraints? It is established that, to obtain a mathematically correct problem, two extra conditions must be met: the areas of the holes must be larger than a small constant, and the total length of the internal hole contours within the optimal figure must be minimal among the varying bodies. Thus, unlike most articles on the optimal design of elastoplastic structures, where a parametric analysis of admissible solutions is performed for a fixed topology, this paper seeks the topological parameter of the design's connectivity.
The paper covers the case when the limit load factor for the sample manifold is quite large and the areas of the admissible holes in the varying plates are larger than a small constant. Arguments are given showing that, under these conditions, the optimal figure is a Maxwell or Michell beam system. As an example, microphotographs of typical biological bone tissues are presented. It is demonstrated that internal holes with large areas cannot be part of a Michell system, while a Maxwell beam system can include holes of significant area. Sufficient conditions are given for hole formation within a solid plate of optimal volume. The results can be generalized to three-dimensional elastoplastic structures.
The paper concludes by posing mathematical problems arising from the new problem of optimally designed elastoplastic systems.
-
Estimation of maximal values of biomass growth yield based on the mass-energy balance of cell metabolism
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 723-750
The biomass growth yield is the ratio of the newly synthesized substance of growing cells to the amount of consumed substrate, the source of matter and energy for cell growth. The yield characterizes the efficiency of converting substrate into cell biomass. The conversion is carried out by the cell's metabolism, the complete aggregate of biochemical reactions occurring in the cells.
This work revisits the problem of predicting the maximal cell growth yield based on balances of the whole metabolism of the living cell and of its fragments, called partial metabolisms (PM). The following PMs are used. For growth on any substrate we consider: i) the standard constructive metabolism (SCM), which consists of pathways identical for the growth of various organisms on any substrate; the SCM starts from several standard compounds (nodal metabolites): glucose, acetyl-CoA, 2-oxoglutarate, erythrose-4-phosphate, oxaloacetate, ribose-5-phosphate, 3-phosphoglycerate, phosphoenolpyruvate, and pyruvate; and ii) the full forward metabolism (FM), the remaining part of the whole metabolism. The first consumes the high-energy bonds (HEB) formed by the second. We examine a generalized variant of the FM that takes into account the possible presence of extracellular products as well as both aerobic and anaerobic growth. Instead of separate balances for the formation of each nodal metabolite, as in our previous work, this work deals with the whole aggregate of these metabolites at once. This makes the solution more compact, requiring fewer biochemical quantities and substantially less computational time. An equation is derived expressing the maximal biomass yield via the specific amounts of HEB formed and consumed by the partial metabolisms. It includes the specific HEB consumption by the SCM, a universal biochemical parameter applicable to a wide range of organisms and growth substrates. To determine this parameter correctly, the full constructive metabolism and its forward part are considered for growth of cells on glucose, the best-studied substrate. We used the previously found properties of the elemental composition of the lipid and lipid-free fractions of cell biomass.
A numerical study of the effect of various interrelations between the flows through different nodal metabolites showed that the SCM requirements for high-energy bonds and NAD(P)H are practically constant. The found HEB-to-formed-biomass coefficient is an efficient tool for estimating the maximal biomass yield from substrates whose primary metabolism is known. The ATP-to-substrate ratio needed for the yield estimate was calculated with the special software package GenMetPath.
-
A neural network model for traffic signs recognition in intelligent transport systems
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 429-435
This work analyzes the problem of traffic sign recognition in intelligent transport systems. The basic concepts of computer vision and image recognition tasks are considered. The most effective approach for analyzing and recognizing images today is the neural network method, and among all kinds of neural networks the convolutional neural network has proven itself best. Activation functions such as Relu and SoftMax are used to solve the classification problem when recognizing traffic signs. The article proposes a technology for recognizing traffic signs. The approach based on a convolutional neural network was chosen for its ability to effectively identify essential features and perform classification. The initial data for the neural network model were prepared and a training sample was formed. The Google Colaboratory cloud service with the external deep-learning libraries TensorFlow and Keras was used as the development platform for the intelligent system. The convolutional part of the network is designed to extract characteristic features from the image. The first fully connected layer includes 512 neurons with the Relu activation function, followed by a Dropout layer used to reduce overfitting. The output fully connected layer includes four neurons, corresponding to the problem of recognizing four types of traffic signs. An intelligent traffic sign recognition system was developed and tested; the convolutional neural network used included four stages of convolution and subsampling. Evaluation of the system with the three-block cross-validation method showed that the error of the neural network model is minimal, so in most cases new images will be recognized correctly.
In addition, the model makes no errors of the first kind, and the rate of errors of the second kind is low, arising only when the input image is very noisy.
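The Relu and SoftMax activations named above, together with the 512-neuron hidden layer and the four-way output, can be sketched without the Keras machinery. The weights and the feature-vector size below are random placeholders, not the trained network from the paper:

```python
import numpy as np

def relu(x):
    # Relu: pass positive values through, zero out the rest
    return np.maximum(x, 0.0)

def softmax(z):
    # numerically stable SoftMax over the class axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
features = rng.normal(size=(1, 128))      # flattened convolutional features (assumed size)
W1 = rng.normal(size=(128, 512))          # placeholder weights of the 512-neuron layer
W2 = rng.normal(size=(512, 4))            # placeholder weights of the 4-class output
hidden = relu(features @ W1)              # Relu hidden layer
probs = softmax(hidden @ W2)              # probabilities over the four sign classes
```

The SoftMax output is a probability distribution over the four classes, so the predicted sign is simply the index of the largest component.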
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index