All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Research and reduction of a mathematical model of a chemical reaction by the Sobol’ method
Computer Research and Modeling, 2016, v. 8, no. 4, pp. 633-646
A technique is proposed for simplifying the mathematical model of a chemical reaction by reducing the number of steps in the reaction scheme, based on an analysis of the sensitivity of an objective functional to changes in the model parameters. The functional measures the proximity of the values calculated with the initial kinetic scheme to those calculated with the scheme after perturbation of its parameters. The advantage of the technique is the ability to analyze complex kinetic schemes and to reduce kinetic models to a size suitable for practical use. Results of computational experiments under different reaction conditions can be included in the functional, yielding a reduced scheme that is consistent with the detailed scheme over the desired range of conditions. Sensitivity analysis of the functional identifies the parameters that make the largest (or smallest) contribution to the result of the process simulation. A mathematical model may contain parameters whose variation affects neither the qualitative nor the quantitative description of the process; the contribution of such parameters to the value of the functional is negligible, so the stages that are not needed to model the kinetic curves of the target substances can be eliminated from consideration. The kinetic scheme of formaldehyde oxidation, whose detailed mechanism includes 25 stages and 15 substances, was investigated with this method. Local and global sensitivity analysis identified the most important stages of the process, those that affect the overall dynamics of the target concentrations. A reduced scheme of the formaldehyde oxidation reaction was obtained; it describes the behavior of the main substances as well as the detailed scheme does, but with far fewer reaction stages. Results of a comparative simulation of formaldehyde oxidation with the detailed and reduced schemes are given, computational aspects of applying Sobol’s global method to problems of chemical kinetics are discussed using this reaction as an example, and local, global, and total sensitivity indices are compared.
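Sobol’s method as used here reduces, in outline, to comparing variance-based sensitivity indices estimated by Monte Carlo sampling. Below is a minimal Python sketch of that estimation for a hypothetical two-step scheme A → B → P with uncertain rate constants k1, k2; the toy objective, parameter ranges, and sample size are illustrative assumptions, not the paper's model.

```python
# Monte Carlo estimation of Sobol' first-order and total indices
# for a toy kinetic objective (Saltelli/Jansen estimators).
import numpy as np

def target(k):
    # Toy objective: final product concentration for A -k1-> B -k2-> P,
    # integrated by explicit Euler (a stand-in for the paper's functional).
    a, b, p = 1.0, 0.0, 0.0
    dt = 0.01
    for _ in range(1000):
        r1, r2 = k[0] * a, k[1] * b
        a, b, p = a - r1 * dt, b + (r1 - r2) * dt, p + r2 * dt
    return p

rng = np.random.default_rng(0)
n, d = 2000, 2
A = rng.uniform(0.1, 2.0, (n, d))        # two independent sample matrices
B = rng.uniform(0.1, 2.0, (n, d))
fA = np.array([target(x) for x in A])
fB = np.array([target(x) for x in B])
V = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # resample only parameter i
    fABi = np.array([target(x) for x in ABi])
    S = np.mean(fB * (fABi - fA)) / V            # first-order index (Saltelli)
    ST = 0.5 * np.mean((fA - fABi) ** 2) / V     # total index (Jansen)
    print(f"k{i + 1}: S = {S:.3f}, ST = {ST:.3f}")
```

Stages whose total index stays near zero over the conditions of interest are the candidates for elimination from the scheme.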
-
Numerical simulation of frequency dependence of dielectric permittivity and electrical conductivity of saturated porous media
Computer Research and Modeling, 2016, v. 8, no. 5, pp. 765-773
This article presents a numerical simulation technique for determining the effective spectral electromagnetic properties (effective electrical conductivity and relative dielectric permittivity) of saturated porous media. Information about these properties is widely used in the interpretation of petrophysical borehole data and in studies of rock core samples. The main feature of the paper is that it employs three-dimensional saturated digital rock models constructed from combined data on the microscopic structure of the porous medium and on the capillary equilibrium of the oil-water mixture in the pores. Data on the microscopic structure of the model are obtained by X-ray microtomography; the distributions of the saturating fluids are based on hydrodynamic simulations with the density functional technique. To determine the electromagnetic properties of the numerical model, the time-domain Fourier transform of the Maxwell equations is considered. In the low-frequency approximation the problem reduces to solving an elliptic equation for the distribution of the complex electric potential. The finite-difference approximation is based on discretizing the model with a homogeneous isotropic orthogonal grid, so that each computational cell contains exactly one medium: water, oil, or rock. To obtain a suitable numerical model, the distributions of the saturating components are segmented. This modification avoids the use of heterogeneous grids and removes from the simulation results the influence of the additional techniques that would otherwise be needed to assign properties to cells filled with a mixture of media. The resulting system of difference equations is solved by the biconjugate gradient stabilized method with a multigrid preconditioner. From the computed complex electric potential, average values of the electrical conductivity and relative dielectric permittivity are calculated. For simplicity, the paper considers only simulations in which the conductivities and permittivities of the model components have no spectral dependence of their own. The results of numerical simulations of the spectral dependence of the effective characteristics of heterogeneously saturated porous media (electrical conductivity and relative dielectric permittivity) over a broad frequency range and for multiple water saturations are presented in figures and a table. The efficiency of the presented approach for determining the spectral electrical properties of saturated rocks is discussed in the conclusion.
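In the low-frequency approximation described above, the computation reduces to a complex-valued elliptic solve on the segmented voxel model. The following 2D Python sketch illustrates the scheme (the paper works in 3D and uses BiCGSTAB with a multigrid preconditioner; a direct sparse solve is substituted here); grid size, frequency, and material values are illustrative assumptions.

```python
# Solve div( s* grad(phi) ) = 0 for the complex potential phi on a voxel
# model, where s* = sigma + i*omega*eps0*eps_r is the cell admittivity.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

eps0 = 8.854e-12
omega = 2 * np.pi * 1e6                    # angular frequency, 1 MHz
nx, ny = 32, 32                            # voxel grid (cell size cancels in 2D)

rng = np.random.default_rng(1)
rock = rng.random((ny, nx)) < 0.6          # segmented model: rock vs. brine
s = np.where(rock, 1e-4 + 1j * omega * eps0 * 5.0,
                   1.0 + 1j * omega * eps0 * 80.0)

def idx(j, i):
    return j * nx + i

rows, cols, vals = [], [], []
rhs = np.zeros(nx * ny, dtype=complex)
for j in range(ny):
    for i in range(nx):
        k, diag = idx(j, i), 0.0
        for dj, di in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            jj, ii = j + dj, i + di
            if ii < 0:                     # left face: phi = 1 (Dirichlet)
                g = 2 * s[j, i]; diag += g; rhs[k] += g
            elif ii >= nx:                 # right face: phi = 0 (Dirichlet)
                g = 2 * s[j, i]; diag += g
            elif 0 <= jj < ny:             # interior face: harmonic-mean conductance
                g = 2 * s[j, i] * s[jj, ii] / (s[j, i] + s[jj, ii])
                diag += g
                rows.append(k); cols.append(idx(jj, ii)); vals.append(-g)
            # top/bottom faces: no-flux boundary, nothing to add
        rows.append(k); cols.append(k); vals.append(diag)

A = sp.csr_matrix((vals, (rows, cols)), shape=(nx * ny, nx * ny))
phi = spsolve(A, rhs).reshape(ny, nx)

# Effective admittivity from the total current through the left boundary.
s_eff = np.sum(2 * s[:, 0] * (1.0 - phi[:, 0])) * nx / ny
print(f"sigma_eff = {s_eff.real:.3e} S/m, "
      f"eps_r_eff = {s_eff.imag / (omega * eps0):.1f}")
```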
-
Conversion of the initial indicators of the steel smelting process for subsequent simulation
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 187-199
Production efficiency depends directly on the quality of process control, which in turn relies on the accuracy and efficiency of processing the control and measurement information. Developing mathematical methods for studying system relations and regularities of operation, building mathematical models that account for the structural features of the object under study, and writing software that implements these methods are therefore topical tasks. Practice has shown that the list of parameters involved in the study of a complex object of modern production ranges from a few dozen to several hundred items, and the degree of influence of each factor is not clear at the outset. Determining the model directly under these circumstances is impossible: the amount of required information may be too great, and most of the work on collecting it would be wasted because the influence on the optimization of most factors in the original list would turn out to be negligible. A necessary step in determining a model of a complex object is therefore to reduce the dimension of the factor space. Most industrial plants are hierarchical groups of processes of mass and batch production characterized by hundreds of factors. (Data from the Moldavian Steel Works were taken as the basis for implementing the mathematical methods and testing the constructed models.) To study the system relations and regularities of operation of such complex objects, several informative parameters are usually selected and sampled. This article describes the sequence of steps that bring the initial indicators of the steel smelting process to a form suitable for building a mathematical model for prediction purposes. The implementation of new steel grades also laid the basis for developing an automated product quality management system. The work distinguishes the following stages: collection and analysis of the source data, construction of the table of correlated parameters, and reduction of the factor space by means of correlation pleiades and the method of weight coefficients. The results obtained make it possible to optimize the process of building a model of a multifactor process.
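A minimal Python sketch of the correlation-pleiades idea follows: parameters are grouped into connected components of the graph whose edges link pairs with |r| above a threshold, and one representative is kept per group by a weight criterion. The column names, the threshold r0, and the representative rule are illustrative assumptions, not the paper's data.

```python
# Correlation-based factor-space reduction: group strongly correlated
# parameters into "pleiades" and keep one representative per group.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
base = rng.normal(size=(200, 3))
df = pd.DataFrame({
    "temp": base[:, 0],
    "temp_sensor2": base[:, 0] + 0.05 * rng.normal(size=200),  # near-duplicate
    "o2_flow": base[:, 1],
    "carbon": base[:, 2],
})

r0 = 0.8
corr = df.corr().abs().to_numpy()
n = corr.shape[0]

# Union-find over the |r| > r0 graph yields the pleiades.
parent = list(range(n))
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]; x = parent[x]
    return x
for i in range(n):
    for j in range(i + 1, n):
        if corr[i, j] > r0:
            parent[find(i)] = find(j)

groups = {}
for i in range(n):
    groups.setdefault(find(i), []).append(i)

# Weight-factor choice: keep the member most correlated with its group.
kept = [df.columns[max(g, key=lambda i: corr[i, g].sum())]
        for g in groups.values()]
print("kept factors:", kept)
```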
-
Four-factor computational experiment for a random walk on a two-dimensional square field
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 905-918
Random search has become a widespread and effective tool for solving complex optimization and adaptation problems. This work considers the average duration of a random search for one object by another on a square field, depending on several factors. The problem was solved with a designed experiment with four factors and an orthogonal plan of 54 rows. Within each row, the initial conditions and the cellular automaton transition rules were simulated and the duration of the search for one object by the other was measured. As a result, a regression model was constructed for the average duration of a random search as a function of the four factors considered, which specify the initial positions of the two objects and the conditions of their movement and detection. The most significant of the considered factors determining the average search time were identified, and the constructed model was interpreted in terms of the random search problem. An important result of the work is that the model reveals the qualitative and quantitative influence on the average search duration of the initial positions of the objects, the lattice size, and the transition rules. It is shown that initial proximity of the objects on the lattice does not guarantee a quick search if both of them move. In addition, it is estimated quantitatively by what factor the average search time increases or decreases when the speed of the searching object grows by one unit, or when the field size grows by one unit, for different initial positions of the two objects. The number of steps needed to find the object grows exponentially with the lattice size when the other factors are fixed. The conditions for the greatest increase in average search duration are the maximum separation of the objects combined with the immobility of one of them: changing the field size by one unit (for example, from $4 \times 4$ to $5 \times 5$) can then increase the average search duration by a factor of $e^{1.69} \approx 5.42$. The problem considered may be relevant both for state security applications and, for example, in queueing theory.
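The simulated search itself can be sketched as follows. This Python stand-in uses simple random steps and same-cell detection rather than the paper's exact cellular automaton rules, and the speeds and field sizes are illustrative.

```python
# Random search on an n-by-n lattice: the seeker may take several steps
# per time unit; search ends when both objects occupy the same cell.
import numpy as np

def search_time(n, seeker_speed=1, target_speed=1, start=None, rng=None,
                max_steps=10**6):
    rng = rng or np.random.default_rng()
    seeker, target = (np.array([0, 0]), np.array([n - 1, n - 1])) \
        if start is None else map(np.array, start)
    moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])
    for t in range(1, max_steps + 1):
        for _ in range(seeker_speed):       # faster object moves several cells
            seeker = np.clip(seeker + moves[rng.integers(4)], 0, n - 1)
        for _ in range(target_speed):
            target = np.clip(target + moves[rng.integers(4)], 0, n - 1)
        if np.array_equal(seeker, target):
            return t
    return max_steps

rng = np.random.default_rng(3)
for n in (4, 5, 6):
    mean_t = np.mean([search_time(n, rng=rng) for _ in range(200)])
    print(f"{n}x{n}: average search duration ~ {mean_t:.0f} steps")
```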
-
Application of Turbulence Problem Solver (TPS) software complex for numerical modeling of the interaction between laser radiation and metals
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 619-630
The work is devoted to the use of the Turbulence Problem Solver (TPS) software package for numerical simulation of a wide range of laser problems. Its capabilities are demonstrated on the example of numerical simulation of the interaction of femtosecond laser pulses with thin metal films. The TPS package developed by the authors is intended for the numerical solution of hyperbolic systems of differential equations on multiprocessor computing systems with distributed memory. The package is a modern and extensible software product. Its architecture lets the researcher model different physical processes in a uniform way, using different numerical methods together with program blocks containing problem-specific initial conditions, boundary conditions, and source terms. The package allows its functionality to be extended by adding new classes of problems, computational methods, initial and boundary conditions, and equations of state of matter. The numerical methods implemented in the package were tested on problems in one-, two-, and three-dimensional geometry, including Riemann problems on the decay of an arbitrary discontinuity with different configurations of the exact solution.
Thin films on substrates are an important class of targets for nanomodification of surfaces in plasmonics and sensor applications, and many papers are devoted to the subject. Most of them, however, focus on the dynamics of the film itself, paying little attention to the substrate and treating it simply as an object that absorbs the first compression wave and does not affect the surface structures arising from irradiation. The paper describes in detail a computational experiment on the numerical simulation of the interaction of a single ultrashort laser pulse with a gold film deposited on a thick glass substrate. A uniform rectangular grid and the first-order Godunov numerical method were used. The presented results confirm the theory of the shock-wave mechanism of hole formation in metal under femtosecond laser action for the case of a thin gold film, about 50 nm thick, on a thick glass substrate.
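The first-order Godunov method named above can be illustrated on a model problem. The Python sketch below applies it to the 1D inviscid Burgers equation rather than the paper's laser-matter equations; the grid, CFL number, and initial data are illustrative.

```python
# First-order Godunov scheme for u_t + (u^2/2)_x = 0 with an exact
# Riemann solver at each cell face.
import numpy as np

def godunov_flux(ul, ur):
    # Exact Riemann solver for Burgers: flux f(u) = u^2 / 2.
    if ul > ur:                       # shock, speed s = (ul + ur) / 2
        return ul**2 / 2 if ul + ur > 0 else ur**2 / 2
    if ul > 0: return ul**2 / 2       # right-moving rarefaction
    if ur < 0: return ur**2 / 2       # left-moving rarefaction
    return 0.0                        # sonic point inside the fan

nx, cfl = 200, 0.9
dx = 1.0 / nx
x = np.linspace(0, 1, nx, endpoint=False) + 0.5 * dx
u = np.where(x < 0.5, 1.0, 0.0)      # Riemann data: a right-moving shock

t, t_end = 0.0, 0.25
while t < t_end:
    dt = min(cfl * dx / max(abs(u).max(), 1e-12), t_end - t)
    ue = np.concatenate([[u[0]], u, [u[-1]]])   # outflow ghost cells
    F = np.array([godunov_flux(ue[i], ue[i + 1]) for i in range(nx + 1)])
    u -= dt / dx * (F[1:] - F[:-1])
    t += dt

print("shock position ~", x[np.argmin(np.abs(u - 0.5))], "(exact: 0.625)")
```

The shock travels at speed 0.5, so at t = 0.25 it should sit at x = 0.625; the first-order scheme smears it over a few cells, which is the behavior such schemes trade for robustness.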
-
Development of an anisotropic nonlinear noise-reduction algorithm for computed tomography data with a context dynamic threshold
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 233-248
The article deals with the development of a noise-reduction algorithm based on anisotropic nonlinear filtering of computed tomography (CT) data. A review of the domestic and foreign literature showed that the most effective CT noise-reduction algorithms use complex methods of data analysis and processing, such as bilateral, adaptive, and three-dimensional filtering. Combinations of such techniques are, however, rarely used in practice because of the long processing time per slice. It was therefore decided to develop an efficient and fast noise-reduction algorithm based on a simplified bilateral filtering method with three-dimensional data accumulation. The algorithm was implemented in C++11 in Microsoft Visual Studio 2015. Its main distinctive feature is the use of an improved mathematical model of CT noise, developed earlier by our team and based on Poisson and Gaussian distributions of the logarithmic value, which allows a more accurate determination of the noise level and hence of the data-processing threshold. The algorithm produces CT data with a lower noise level. Visual evaluation showed increased information content of the processed data compared with the original, clearer rendering of homogeneous regions, and a significant reduction of noise in the processed areas. Numerical evaluation showed more than a sixfold decrease in the standard deviation (SD) in the processed areas, while high values of the coefficient of determination showed that the data were not distorted and changed only through noise removal. The newly developed context dynamic threshold made it possible to decrease the SD level in every area of the data. Its main advantages are simplicity and speed, achieved by a preliminary evaluation of the data array that yields threshold values assigned to each pixel of the CT data. It operates on threshold criteria, which fits well both into the developed anisotropic nonlinear noise-reduction algorithm and into other noise-reduction algorithms. The algorithm functions successfully as part of the MultiVox workstation and is being prepared for deployment in the unified radiological network of the city of Moscow.
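A minimal Python sketch of a simplified bilateral filter with a per-pixel similarity threshold follows. The hard-threshold range weight and the crude noise estimate stand in for the paper's Poisson-Gaussian noise model and context dynamic threshold, and all parameters are illustrative.

```python
# Simplified bilateral filter: fixed Gaussian spatial weights, and a range
# term degenerated to a hard intensity threshold derived from a noise estimate.
import numpy as np

def simplified_bilateral(img, radius=2, sigma_s=2.0, k=1.5):
    pad = np.pad(img.astype(float), radius, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_s = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # spatial weights
    # Per-pixel threshold from a crude global noise estimate (stand-in
    # for the context dynamic threshold).
    noise = np.abs(img - np.median(img))
    thr = k * np.median(noise) + 1e-6
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            mask = np.abs(win - img[i, j]) < thr        # keep similar pixels only
            w = w_s * mask
            out[i, j] = (w * win).sum() / w.sum()
    return out

img = np.random.default_rng(4).poisson(50, (64, 64)).astype(float)
den = simplified_bilateral(img)
print(f"SD before: {img.std():.2f}, after: {den.std():.2f}")
```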
-
Hierarchical method for mathematical modeling of stochastic thermal processes in complex electronic systems
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 613-630
A hierarchical method of mathematical and computer modeling of interval-stochastic thermal processes in complex electronic systems for various purposes is developed. The concept of hierarchical structuring reflects both the structural hierarchy of a complex electronic system and the hierarchy of mathematical models of the heat exchange processes. Thermal processes that account for the various physical phenomena in complex electronic systems are described by systems of stochastic, unsteady, nonlinear partial differential equations, so their computer simulation encounters considerable computational difficulties even on supercomputers; the hierarchical method avoids these difficulties. The hierarchical structure of an electronic system design is in general characterized by five levels: level 1, the active elements (microcircuits and electro-radio elements); level 2, the electronic module; level 3, a panel combining a set of electronic modules; level 4, a block of panels; level 5, a rack installed in a stationary or mobile room. The hierarchy of models and the modeling of stochastic thermal processes are built in the reverse order of the hierarchical structure of the design, and the interval-stochastic thermal processes are modeled by deriving equations for statistical measures. The method developed in the article allows the principal features of thermal processes to be taken into account: the stochastic nature of thermal, electrical, and design factors in the production, assembly, and installation of electronic systems; the stochastic scatter of operating conditions and the environment; the nonlinear temperature dependences of the heat exchange factors; and the unsteady nature of the thermal processes. The equations obtained for the statistical measures of the stochastic thermal processes form a system of 14 unsteady nonlinear first-order ordinary differential equations, whose solution is easily carried out on modern computers by existing numerical methods. Results of applying the method to the computer simulation of stochastic thermal processes in electronic systems are considered. The hierarchical method has been applied in practice to the thermal design of real electronic systems and the creation of modern competitive devices.
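The idea of replacing a stochastic problem with ordinary differential equations for statistical measures can be sketched on a single lumped thermal node. The two moment equations below stand in for the paper's system of 14, and all coefficients are illustrative assumptions.

```python
# Moment equations for a lumped node with a stochastic heat source:
# propagate the mean m(t) and variance v(t) of the node temperature.
import numpy as np
from scipy.integrate import solve_ivp

C, G = 5.0, 0.8        # heat capacity [J/K], conductance to ambient [W/K]
T_amb = 25.0           # ambient temperature [C]
q_mean, S = 10.0, 2.0  # mean heat source [W] and its noise intensity

def rhs(t, y):
    m, v = y
    dm = (q_mean - G * (m - T_amb)) / C     # moment equation for the mean
    dv = -2.0 * G / C * v + S / C**2        # moment equation for the variance
    return [dm, dv]

sol = solve_ivp(rhs, (0.0, 60.0), [T_amb, 0.0], rtol=1e-8)
m, v = sol.y[0, -1], sol.y[1, -1]
print(f"steady-state mean ~ {m:.1f} C, 3-sigma spread ~ {3 * np.sqrt(v):.2f} C")
```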
-
Physical research, numerical and analytical modeling of explosion phenomena. A review
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 505-546
The review considers a wide range of phenomena and problems associated with explosions. Detailed numerical studies revealed an interesting physical effect: the formation of discrete vortex structures directly behind the front of a shock wave propagating through dense layers of a heterogeneous atmosphere. The need for further investigation of such phenomena, and for determining how closely they are connected with the possible development of gas-dynamic instability, is shown. A brief analysis of the numerous works on the thermal explosion of meteoroids during their high-speed flight through the Earth’s atmosphere is given. Much attention is paid to the development of a numerical algorithm for calculating the simultaneous explosion of several meteoroid fragments, and the features of the resulting gas-dynamic flow are analyzed. The work shows that previously developed algorithms for calculating explosions can be used successfully to study explosive volcanic eruptions, and presents and discusses the results of such studies for both continental and underwater volcanoes under certain restrictions on the conditions of volcanic activity.
A mathematical analysis is performed and the results of analytical studies are presented for a number of important physical phenomena characteristic of explosions of high specific energy in the ionosphere. It is shown that preliminary laboratory physical modeling of the main processes determining these phenomena is of fundamental importance for developing sufficiently complete and adequate theoretical and numerical models of such complex phenomena as powerful plasma disturbances in the ionosphere. Laser plasma is the closest object for such modeling. The results of the corresponding theoretical and experimental studies are presented and their scientific and practical significance is shown. A brief review is given of recent work on the use of laser radiation for laboratory physical modeling of the effects of a nuclear explosion on asteroid materials.
The analysis performed in the review made it possible to isolate and give a preliminary formulation to several interesting and scientifically significant questions that should be investigated on the basis of the understanding already obtained: finely dispersed chemically active systems formed during volcanic eruptions; small-scale vortex structures; and the generation of spontaneous magnetic fields due to the development of instabilities, together with their role in the transformation of plasma energy during its expansion in the ionosphere. It is also important to study a possible laboratory physical simulation of the thermal explosion of bodies under the action of a high-speed plasma flow, which so far has only theoretical interpretations.
-
Neural network analysis of transportation flows in an urban agglomeration using data from public video cameras
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 305-318
Correct modeling of the complex dynamics of urban transportation flows requires collecting large volumes of empirical data to specify the types of flow modes and to identify them. At the same time, installing a large number of observation posts is expensive and not always technically feasible. The result is insufficient factual support both for the traffic control systems and for urban planners, with obvious consequences for the quality of their decisions. As one means of providing large-scale data collection, at least for qualitative situation analysis, wide-area video cameras are used in various situation centers, where human operators responsible for observation and control analyze their output. Some video cameras provide their streams for public access, which makes them a valuable resource for transportation studies. There are, however, significant problems in obtaining qualitative data from such cameras, related to the theory and practice of image processing. This study is devoted to the practical application of mainstream neural network technologies for estimating the essential characteristics of actual transportation flows. The problems arising in processing these data are analyzed and solutions are suggested. Convolutional neural networks are used for tracking, and methods for obtaining the basic parameters of transportation flows from these observations are studied. Simplified neural networks are used to prepare training sets for the deep learning network YOLOv4, which is then used to estimate the speed and density of automobile flows.
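Once per-frame detections are available, flow speed and density can be estimated by associating detections between frames. In the Python sketch below, detect() is a hypothetical stand-in for a YOLOv4 inference call, the association is greedy nearest-neighbour matching rather than the paper's CNN tracking, and the camera calibration constants are illustrative assumptions.

```python
# Turn per-frame vehicle detections into flow speed and density estimates.
import numpy as np

def detect(frame):
    # Hypothetical stand-in for YOLOv4 inference: in this sketch a "frame"
    # is already an (N, 2) array of vehicle box centers in pixels.
    return frame

def flow_stats(frames, fps=25.0, m_per_px=0.05, road_len_m=100.0, max_jump=40.0):
    speeds, counts, prev = [], [], None
    for frame in frames:
        cur = detect(frame)
        counts.append(len(cur))
        if prev is not None and len(prev) and len(cur):
            # Greedy nearest-neighbour association between consecutive frames.
            d = np.linalg.norm(cur[:, None] - prev[None, :], axis=2)
            nearest = d.min(axis=1)
            ok = nearest < max_jump                    # discard implausible matches
            speeds.extend(nearest[ok] * m_per_px * fps * 3.6)   # km/h
        prev = cur
    density = np.mean(counts) / (road_len_m / 1000.0)           # vehicles per km
    return np.mean(speeds), density

# Synthetic demo: five vehicles drifting 4 px per frame to the right.
rng = np.random.default_rng(5)
base = rng.uniform(0.0, 500.0, (5, 2))
frames = [base + [4.0 * k, 0.0] for k in range(50)]
v_kmh, dens = flow_stats(frames)
print(f"speed ~ {v_kmh:.1f} km/h, density ~ {dens:.1f} veh/km")
```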
-
Estimation of the probability of spontaneous synthesis of computational structures in relation to the implementation of parallel information processing
Computer Research and Modeling, 2021, v. 13, no. 4, pp. 677-696
We consider a model of the spontaneous formation of a computational structure in the human brain for solving a given class of tasks in the course of performing a series of similar tasks. The model is based on a special definition of a numerical measure of the complexity of the solution algorithm. This measure has an additivity property: the complexity of a computational structure consisting of two independent structures equals the sum of their complexities. The probability of the spontaneous emergence of a structure then depends exponentially on its complexity, with an exponential coefficient that must be determined experimentally for each type of problem and may depend on the form in which the source data are presented and the result is reported. This estimation method was applied to the results of a series of experiments, described in previously published papers, that determined the strategy for solving a series of similar problems with a growing number of initial data. Two main strategies were considered: sequential execution of the computational algorithm, or the use of parallel computation in those tasks where it is effective; the strategies differ in how the calculations are performed. Using the complexity estimate of the schemes, the empirical probability of one strategy can be used to calculate the probability of the other. The calculations performed showed good agreement between the calculated and empirical probabilities. This confirms the hypothesis that structures that solve the problem form spontaneously during a person's initial training. The paper contains a brief description of the experiments, detailed computational schemes, a rigorous definition of the complexity measure of computational structures, and a derivation of the dependence of the probability of structure formation on its complexity.
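A short formal sketch of the stated relation, in our own notation rather than the paper's: additivity of the complexity measure combines with the exponential form so that the empirical probability of one strategy determines that of the other.

```latex
% S denotes a computational structure, C its complexity,
% P the probability of its spontaneous formation.
\begin{align*}
  C(S_1 \oplus S_2) &= C(S_1) + C(S_2)
    && \text{additivity for independent structures}\\
  P(S) &= \alpha\, e^{-\lambda C(S)}
    && \text{$\lambda$ determined experimentally per task type}\\
  \Rightarrow\; P(S_1 \oplus S_2) &= \tfrac{1}{\alpha}\, P(S_1)\, P(S_2)
    && \text{one strategy's probability yields the other's}
\end{align*}
```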