- Weighted vector finite element method and its applications
Computer Research and Modeling, 2019, v. 11, no. 1, pp. 71-86
Mathematical models of many natural processes are described by partial differential equations with singular solutions. Classical numerical methods for finding approximate solutions to such problems are inefficient. In the present paper, a boundary value problem for the vector wave equation in an L-shaped domain is considered. The presence of a reentrant corner of size $3\pi/2$ on the boundary of the computational domain leads to a strong singularity of the solution, i.e. it does not belong to the Sobolev space $H^1$, so classical and special numerical methods have a convergence rate lower than $O(h)$. Therefore, in the present paper a special weighted set of vector functions is introduced, in which the solution of the considered boundary value problem is defined as an $R_\nu$-generalized one.
For the numerical determination of the $R_\nu$-generalized solution, a weighted vector finite element method is constructed. The key feature of this method is that the basis functions contain, as a factor, a special weight function raised to a degree that depends on the properties of the solution of the initial problem. This significantly raises the convergence speed of the approximate solution to the exact one as the mesh is refined. Moreover, the introduced basis functions are solenoidal, so the solenoidal condition on the solution is satisfied exactly and spurious numerical solutions are prevented.
Results of numerical experiments are presented for a series of model problems of different types: some have a solution containing only a singular component, while others have a solution containing both singular and regular components. The numerical experiments showed that, as the finite element mesh is refined, the convergence rate of the constructed weighted vector finite element method is $O(h)$, which is more than one and a half times better than that of special methods developed for the described problem, namely the singular complement method and the regularization method. Other features of the constructed method are its algorithmic simplicity and the naturalness of the solution determination, which is beneficial for numerical computations.
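A minimal sketch of the weighted-basis idea described above (our illustration, not the authors' code): each standard finite element basis function is multiplied by the weight function raised to a degree. The corner location, the neighborhood radius and the exponent below are illustrative assumptions.

```python
import numpy as np

CORNER = np.array([0.0, 0.0])  # reentrant corner of the L-shaped domain (assumed)
DELTA = 0.1                    # radius of the singularity neighborhood (assumed)
NU = 0.7                       # exponent from the R_nu-generalized setting (assumed)

def weight(x):
    """Weight function: distance to the corner, capped at DELTA."""
    return min(float(np.linalg.norm(np.asarray(x) - CORNER)), DELTA)

def weighted_basis(phi, nu=NU):
    """Multiply a standard FE basis function phi(x) by weight(x)**nu."""
    return lambda x: weight(x) ** nu * phi(x)

# Example: weighting a bilinear hat function centered near the corner.
def hat(x):
    return max(0.0, 1 - abs(x[0] - 0.05) / 0.05) * max(0.0, 1 - abs(x[1] - 0.05) / 0.05)

phi_w = weighted_basis(hat)
print(phi_w((0.06, 0.05)))  # weighted basis value at a point near the corner
```

The weighting suppresses the basis near the singularity point, which is what allows the method to recover the $O(h)$ rate despite the solution lying outside $H^1$.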
- Development of an anisotropic nonlinear noise-reduction algorithm for computed tomography data with a context dynamic threshold
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 233-248
The article deals with the development of a noise-reduction algorithm based on anisotropic nonlinear filtering of computed tomography (CT) data. Analysis of domestic and foreign literature has shown that the most effective noise-reduction algorithms for CT data use complex methods of data analysis and processing, such as bilateral, adaptive, three-dimensional and other types of filtering. However, a combination of such techniques is rarely used in practice because of the long processing time per slice. In this regard, it was decided to develop an efficient and fast noise-reduction algorithm based on a simplified bilateral filtering method with three-dimensional data accumulation. The algorithm was developed in C++11 in Microsoft Visual Studio 2015. The main distinction of the developed noise-reduction algorithm is the use of an improved mathematical model of CT noise, based on the Poisson and Gaussian distributions of the logarithmic value, developed earlier by our team. This allows a more accurate determination of the noise level and, therefore, of the data-processing threshold. As a result of applying the noise-reduction algorithm, processed CT data with a lower noise level were obtained. Visual evaluation showed the increased information content of the processed data compared to the original, clearer rendering of homogeneous regions, and a significant reduction of noise in the processed areas. Numerical assessment of the algorithm's results showed a decrease of the standard deviation (SD) by more than a factor of 6 in the processed areas, while high values of the coefficient of determination showed that the data were not distorted and changed only due to the removal of noise. Usage of the newly developed context dynamic threshold made it possible to decrease the SD level in every area of the data. The main advantages of the developed threshold are its simplicity and speed, achieved by a preliminary estimation of the data array and derivation of threshold values that are put in correspondence with each pixel of the CT data. Its principle of operation is based on threshold criteria, which fits well both into the developed noise-reduction algorithm based on anisotropic nonlinear filtering and into other noise-reduction algorithms. The algorithm successfully functions as part of the MultiVox workstation and is being prepared for deployment in the unified radiological network of the city of Moscow.
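For orientation, a minimal sketch of a simplified bilateral filter on one slice (our illustration in Python, not the authors' C++ implementation; the fixed range parameter sigma_i stands in for the threshold that the paper derives from its noise model):

```python
import numpy as np

def bilateral_slice(img, radius=2, sigma_s=2.0, sigma_i=30.0):
    """Simplified bilateral filter for a single 2D slice (illustrative parameters)."""
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # fixed spatial kernel
    padded = np.pad(img.astype(np.float64), radius, mode="reflect")
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(patch - img[y, x])**2 / (2 * sigma_i**2))  # range kernel
            k = spatial * rng
            out[y, x] = (k * patch).sum() / k.sum()
    return out

noisy = np.random.normal(100.0, 10.0, (64, 64))  # synthetic noisy slice
print(noisy.std(), bilateral_slice(noisy).std())  # SD drops after filtering
```

The paper's version additionally accumulates data over neighboring slices (three-dimensional accumulation) and replaces the fixed range parameter with a per-pixel context dynamic threshold.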
- Hierarchical method for mathematical modeling of stochastic thermal processes in complex electronic systems
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 613-630
A hierarchical method of mathematical and computer modeling of interval-stochastic thermal processes in complex electronic systems for various purposes is developed. The developed concept of hierarchical structuring reflects both the constructive hierarchy of a complex electronic system and the hierarchy of mathematical models of heat exchange processes. Thermal processes that take into account various physical phenomena in complex electronic systems are described by systems of stochastic, unsteady and nonlinear partial differential equations, and their computer simulation therefore encounters considerable computational difficulties even with the use of supercomputers. The hierarchical method avoids these difficulties. The hierarchical structure of an electronic system design is, in general, characterized by five levels: level 1 — the active elements of the electronic system (microcircuits, electro-radio elements); level 2 — an electronic module; level 3 — a panel combining a set of electronic modules; level 4 — a block of panels; level 5 — a stand installed in a stationary or mobile room. The hierarchy of models and the modeling of stochastic thermal processes are built in the reverse order of the hierarchical structure of the electronic system design, while interval-stochastic thermal processes are modeled by deriving equations for statistical measures. The hierarchical method developed in the article makes it possible to take into account the principal features of thermal processes, such as the stochastic nature of thermal, electrical and design factors in the production, assembly and installation of electronic systems, the stochastic scatter of operating conditions and the environment, nonlinear temperature dependencies of heat exchange factors, and the unsteady nature of thermal processes. The equations obtained in the article for the statistical measures of stochastic thermal processes form a system of 14 unsteady nonlinear first-order ordinary differential equations, whose solution is easily implemented on modern computers by existing numerical methods. The results of applying the method to the computer simulation of stochastic thermal processes in electronic systems are considered. The hierarchical method is applied in practice for the thermal design of real electronic systems and the creation of modern competitive devices.
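As a toy illustration of the statistical-measures approach (not the paper's 14-equation system; all coefficients below are our assumptions), one can evolve the mean and variance of the temperature of a single lumped node with stochastically fluctuating dissipated power:

```python
from scipy.integrate import solve_ivp

C = 5.0        # heat capacity, J/K (assumed)
G = 0.8        # conductance to the ambient, W/K (assumed)
T_AMB = 20.0   # ambient temperature, deg C (assumed)
P_MEAN = 10.0  # mean dissipated power, W (assumed)
SIGMA = 2.0    # intensity of power fluctuations (assumed)

def rhs(t, y):
    m, v = y
    dm = (P_MEAN - G * (m - T_AMB)) / C       # equation for the mean temperature
    dv = -2.0 * (G / C) * v + (SIGMA / C)**2  # equation for the temperature variance
    return [dm, dv]

sol = solve_ivp(rhs, (0.0, 100.0), [T_AMB, 0.0], rtol=1e-8)
# Steady state: m -> T_AMB + P_MEAN / G, v -> SIGMA**2 / (2 * G * C)
print(sol.y[0, -1], sol.y[1, -1])
```

The point of the reduction is the same as in the paper: instead of simulating the stochastic PDE system directly, one integrates a small deterministic ODE system for the statistical measures.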
- Application of the grid-characteristic method for mathematical modeling in dynamical problems of deformable solid mechanics
Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1041-1048
- A numerical method for solving the two-dimensional convection equation based on the monotonized Z-scheme for Earth ionosphere simulation
Computer Research and Modeling, 2020, v. 12, no. 1, pp. 43-58
The purpose of the paper is the study of a second-order finite difference scheme based on the Z-scheme. The study consists in the numerical solution of several two-dimensional differential equations that simulate convection of an incompressible medium.
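For reference (written here in our notation, as an assumption about the standard form of such problems), the transported quantity $u$ obeys a two-dimensional convection equation with a divergence-free velocity field $(v_x, v_y)$:

$$\frac{\partial u}{\partial t} + v_x \frac{\partial u}{\partial x} + v_y \frac{\partial u}{\partial y} = 0, \qquad \frac{\partial v_x}{\partial x} + \frac{\partial v_y}{\partial y} = 0.$$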
One practical task requiring the solution of such equations is the numerical simulation of strongly non-stationary mid-scale processes in the Earth's ionosphere. Since convection processes in ionospheric plasma are controlled by the magnetic field, the plasma incompressibility condition is assumed across the magnetic field. For the same reason, the velocities of convective heat and mass transfer along the magnetic field can be rather high.
A relevant task of ionospheric simulation is the study of plasma instabilities of various scales, which arise primarily in the polar and equatorial regions. At the same time, mid-scale irregularities with characteristic sizes of 1–50 km create conditions for the development of small-scale instabilities. The latter lead to the F-spread phenomenon, which significantly affects the accuracy of satellite positioning systems as well as of other space-based and ground-based radio-electronic systems.
The difference schemes used for the simultaneous simulation of such multi-scale processes must have high resolution on the one hand and be monotonic on the other. The reason for these contradictory requirements is that instabilities amplify the errors of difference schemes, especially errors of the dispersion type; such amplification of errors usually leads to nonphysical results in the numerical solution.
In the numerical solution of three-dimensional mathematical models of ionospheric plasma, the following splitting by physical processes is used: the first splitting step carries out convection along the magnetic field, and the second step carries out convection across it. The second-order finite difference scheme investigated in the paper approximately solves the convection-across equations. The scheme is constructed by a nonlinear monotonizing procedure on the basis of the Z-scheme, which is one of the second-order schemes. This monotonizing procedure uses a nonlinear correction with so-called “oblique differences”, which involve grid nodes belonging to different time layers.
The study was conducted for two cases: the components of the convection vector in the simulation domain had (1) constant sign and (2) variable sign. The dissipative and dispersive characteristics of the scheme for different types of limiting functions were obtained numerically.
The results of the numerical experiments allow the following conclusions (a sketch of the limiter functions involved is given after the list).
1. For a discontinuous initial profile, the best properties were shown by the SuperBee limiter.
2. For a continuous initial profile, the SuperBee limiter is better at large spatial steps, while the Koren limiter is better at small steps.
3. For a smooth initial profile, the best results were shown by the Koren limiter.
4. The smooth F limiter showed results similar to the Koren limiter.
5. Limiters of different types leave dispersive errors; at the same time, the dependence of the dispersive errors on the scheme parameters is highly variable and intricate.
6. The monotonicity of the considered difference scheme was confirmed numerically in all calculations. The property of non-increasing total variation was confirmed numerically for all the specified limiter functions for the one-dimensional equation.
7. At time steps not exceeding the Courant step, the constructed difference scheme is monotonic and shows good accuracy for different types of solutions. When the Courant step is exceeded, the scheme remains stable but becomes unsuitable for instability problems, since the monotonicity conditions are no longer satisfied.
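The SuperBee and Koren limiters compared above have standard closed forms. Below is a sketch under stated assumptions: a generic 1D flux-limited upwind step with constant velocity and periodic boundaries, not the authors' Z-scheme with oblique differences across time layers.

```python
import numpy as np

def superbee(r):
    return np.maximum(0.0, np.maximum(np.minimum(2 * r, 1.0), np.minimum(r, 2.0)))

def koren(r):
    return np.maximum(0.0, np.minimum(np.minimum(2 * r, (1 + 2 * r) / 3), 2.0))

def advect_step(u, cfl, limiter):
    """One limited upwind step of u_t + a u_x = 0 (a > 0, Courant number cfl)."""
    d = np.roll(u, -1) - u   # forward differences u_{i+1} - u_i
    dm = u - np.roll(u, 1)   # backward differences u_i - u_{i-1}
    r = np.where(d != 0, dm / np.where(d != 0, d, 1.0), 0.0)  # smoothness ratio
    uface = u + 0.5 * limiter(r) * (1 - cfl) * d              # limited u_{i+1/2}
    return u - cfl * (uface - np.roll(uface, 1))

x = np.linspace(0, 1, 200, endpoint=False)
u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)  # discontinuous initial profile
for _ in range(100):
    u = advect_step(u, 0.5, superbee)
print(u.min(), u.max())  # monotonic: no over/undershoots beyond [0, 1]
```

With the limiter set to zero this reduces to first-order upwind, and with it set to one, to the Lax–Wendroff scheme; the limiter blends the two to keep second-order accuracy where the solution is smooth while preserving monotonicity at fronts.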
- Physical research, numerical and analytical modeling of explosion phenomena. A review
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 505-546
The review considers a wide range of phenomena and problems associated with explosions. Detailed numerical studies revealed an interesting physical effect: the formation of discrete vortex structures directly behind the front of a shock wave propagating in the dense layers of a heterogeneous atmosphere. The necessity of further investigation of such phenomena and of determining the degree of their connection with the possible development of gas-dynamic instability is shown. A brief analysis of numerous works on the thermal explosion of meteoroids during their high-speed movement in the Earth's atmosphere is given. Much attention is paid to the development of a numerical algorithm for calculating the simultaneous explosion of several fragments of meteoroids, and the features of the development of such a gas-dynamic flow are analyzed. The work shows that previously developed algorithms for calculating explosions can be successfully used to study explosive volcanic eruptions. The paper presents and discusses the results of such studies for both continental and underwater volcanoes, with certain restrictions on the conditions of volcanic activity.
A mathematical analysis is performed, and the results of analytical studies of a number of important physical phenomena characteristic of explosions of high specific energy in the ionosphere are presented. It is shown that preliminary laboratory physical modeling of the main processes that determine these phenomena is of fundamental importance for the development of sufficiently complete and adequate theoretical and numerical models of such complex phenomena as powerful plasma disturbances in the ionosphere. Laser plasma is the closest object for such modeling. The results of the corresponding theoretical and experimental studies are presented, and their scientific and practical significance is shown. A brief review of recent work on the use of laser radiation for laboratory physical modeling of the effects of a nuclear explosion on asteroid materials is given.
As a result of the analysis performed in the review, it was possible to identify and preliminarily formulate some interesting and scientifically significant questions that should be investigated on the basis of the understanding already obtained. These are: finely dispersed chemically active systems formed during volcanic eruptions; small-scale vortex structures; and the generation of spontaneous magnetic fields due to the development of instabilities, together with their role in the transformation of plasma energy during its expansion in the ionosphere. It is also important to study a possible laboratory physical simulation of the thermal explosion of bodies under the influence of a high-speed plasma flow, which so far has only theoretical interpretations.
- Analysis of the basic equation of the physical-statistical approach within the reliability theory of technical systems
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 721-735
Verification of the physical-statistical approach within reliability theory was carried out for the simplest cases, and it confirmed the validity of the approach. An analytical solution of the one-dimensional basic equation of the physical-statistical approach is presented under the assumption of a stationary degradation rate. From a mathematical point of view, this equation is the well-known continuity equation, in which the role of density is played by the distribution function of products in the phase space of their characteristics, and the role of fluid velocity is played by the intensity (rate) of the degradation processes. The latter connects the general formalism with the specifics of degradation mechanisms. The cases of degradation rates that are constant, linear and quadratic in the coordinate are analyzed using the method of characteristics. In the first two cases, the results correspond to physical intuition. At a constant degradation rate, the shape of the initial distribution is preserved, and the distribution itself moves uniformly away from zero. At a linear degradation rate, the distribution either narrows down to a sharp peak (in the singular limit) or expands, with the maximum shifting to the periphery at an exponentially increasing rate; the shape of the distribution is again preserved up to parameters. For an initial normal distribution, the coordinates of the largest value of the distribution maximum during its return motion are obtained analytically.
In the quadratic case, the formal solution demonstrates counterintuitive behavior: it is uniquely defined only on a part of the infinite half-plane, vanishes together with all its derivatives on the boundary, and is ambiguous when crossing the boundary. If it is continued into the other region in accordance with the analytical solution, it takes a two-humped form and conserves the amount of substance but, which is devoid of physical meaning, is periodic in time. If it is continued by zero, the conservation property is violated. The anomaly of the quadratic case is explained, though not rigorously, by the analogy with the motion of a material point whose acceleration is proportional to the square of its velocity. Here we are dealing with a mathematical curiosity. Numerical calculations are given for all cases. Additionally, the entropy of the probability distribution and the reliability function are calculated, and their correlation is traced.
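For reference (in our notation, chosen for illustration), the basic equation is the continuity equation for the distribution function $f(x, t)$ with degradation rate $v(x)$; for the linear case $v(x) = kx$, the method of characteristics gives an explicit solution:

$$\frac{\partial f}{\partial t} + \frac{\partial}{\partial x}\bigl(v(x)\,f\bigr) = 0, \qquad v(x) = kx \;\Rightarrow\; f(x, t) = e^{-kt} f_0\bigl(x\,e^{-kt}\bigr),$$

so for $k > 0$ the maximum drifts to the periphery at an exponentially increasing rate while the amount of substance $\int f\,dx$ is conserved, matching the behavior described above.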
- A method for the numerical solution of a stationary hydrodynamics problem in convective form in an $L$-shaped domain
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1291-1306
An essential class of problems describes physical processes occurring in non-convex domains whose boundary contains a corner greater than 180 degrees. The solution in a neighborhood of such a corner is singular, and finding it by classical approaches entails a loss of accuracy. In the paper, we consider the stationary Navier–Stokes equations, linearized by Picard's iterations, governing the flow of an incompressible viscous fluid in convective form in an $L$-shaped domain. An $R_\nu$-generalized solution of the problem in special sets of weighted spaces is defined, and a special finite element method for finding an approximate $R_\nu$-generalized solution is constructed. Firstly, the functions of the finite element spaces satisfy the law of conservation of mass in the strong sense, i.e. at the grid nodes; for this purpose, the Scott–Vogelius element pair is used. The fulfillment of the mass conservation condition leads to a solution that is more accurate from the physical point of view. Secondly, the basis functions of the finite element spaces are supplemented by weight functions. The degree of the weight function, the parameter $\nu$ in the definition of the $R_\nu$-generalized solution, and the radius of the neighborhood of the singularity point are free parameters of the method. A specially selected combination of them increases the order of the convergence rate of the approximate solution to the exact one almost twofold in comparison with the classical approaches: the convergence rate reaches first order in the grid step in the norms of weighted Sobolev spaces. Thus, it is shown numerically that the convergence rate does not depend on the corner value.
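A typical weight function in such $R_\nu$-generalized formulations (the exact definition in the paper may differ; this form is our illustration) is the distance to the singularity point $x_0$, capped outside a neighborhood of radius $\delta$:

$$\rho(x) = \begin{cases} \operatorname{dist}(x, x_0), & \operatorname{dist}(x, x_0) \le \delta, \\ \delta, & \operatorname{dist}(x, x_0) > \delta, \end{cases}$$

with the weighted basis functions then obtained from the standard ones as $\psi_i(x) = \rho(x)^{\nu^*}\varphi_i(x)$, where the degree $\nu^*$ is one of the free parameters mentioned above.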
- Neural network analysis of transportation flows in an urban agglomeration using data from public video cameras
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 305-318
Correct modeling of the complex dynamics of urban transportation flows requires the collection of large volumes of empirical data to specify the types of flow modes and to identify them. At the same time, setting up a large number of observation posts is expensive and not always technically feasible. All this results in insufficient factual support both for traffic control systems and for urban planners, with obvious consequences for the quality of their decisions. As one means of providing large-scale data collection, at least for qualitative situation analysis, wide-area video cameras are used in various situation centers, where their streams are analyzed by human operators responsible for observation and control. Some video cameras provide their streams for public access, which makes them a valuable resource for transportation studies. However, obtaining qualitative data from such cameras poses significant problems related to the theory and practice of image processing. This study is devoted to the practical application of certain mainstream neural network technologies for estimating the essential characteristics of actual transportation flows. The problems arising in processing these data are analyzed, and solutions are suggested. Convolutional neural networks are used for tracking, and methods for obtaining the basic parameters of transportation flows from these observations are studied. Simplified neural networks are used to prepare training sets for the deep learning network YOLOv4, which is then used to estimate the speed and density of automobile flows.
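A sketch under stated assumptions, not the authors' pipeline: we assume some detector/tracker (the paper uses YOLOv4) already yields per-vehicle positions in road-plane meters for each frame; detect_and_track below is a hypothetical placeholder. Flow density and mean speed are then estimated from counts and displacements over a calibrated road segment.

```python
def detect_and_track(video):
    # Hypothetical placeholder: in the paper's setting this would wrap
    # YOLOv4 detection plus tracking and camera-to-road calibration.
    raise NotImplementedError

def flow_parameters(tracks, segment_len_m, fps):
    """tracks: {track_id: [position in meters, one per frame]}.
    Returns (density in veh/km, mean speed in km/h)."""
    density = len(tracks) / segment_len_m * 1000.0
    speeds = [
        (xs[-1] - xs[0]) / (len(xs) - 1) * fps * 3.6  # m/frame -> km/h
        for xs in tracks.values()
        if len(xs) > 1
    ]
    return density, sum(speeds) / len(speeds) if speeds else 0.0

# Example with synthetic tracks on a 100 m segment filmed at 25 fps.
tracks = {1: [0.0, 0.6, 1.2, 1.8], 2: [40.0, 40.5, 41.0]}
print(flow_parameters(tracks, segment_len_m=100.0, fps=25))
```

The interesting engineering is hidden in the placeholder: detection quality and the pixel-to-meter calibration of a wide-area public camera dominate the accuracy of both estimates.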
- Estimation of the probability of spontaneous synthesis of computational structures in relation to the implementation of parallel information processing
Computer Research and Modeling, 2021, v. 13, no. 4, pp. 677-696
We consider a model of the spontaneous formation of a computational structure in the human brain for solving a given class of tasks in the course of performing a series of similar tasks. The model is based on a special definition of a numerical measure of the complexity of the solution algorithm. This measure has an informational property: the complexity of a computational structure consisting of two independent structures is equal to the sum of the complexities of these structures. The probability of the spontaneous emergence of a structure then depends exponentially on its complexity. The exponential coefficient requires experimental determination for each type of problem; it may depend on the form of presentation of the source data and on the procedure for outputting the result. This estimation method was applied to the results of a series of experiments, described in previously published papers, that determined the strategy for solving a series of similar problems with a growing number of initial data. Two main strategies were considered: sequential execution of the computational algorithm, or the use of parallel computing in those tasks where it is effective; the strategies differ in how the calculations are performed. Using the complexity estimate of the schemes, the empirical probability of one of the strategies can be used to calculate the probability of the other. The calculations performed showed good agreement between the calculated and empirical probabilities, which confirms the hypothesis of the spontaneous formation of structures that solve the problem during a person's initial training. The paper contains a brief description of the experiments, detailed computational schemes, a rigorous definition of the complexity measure of computational structures, and the derivation of the dependence of the probability of structure formation on its complexity.
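A minimal sketch of our reading of the stated exponential dependence $P \propto e^{-\lambda C}$ (all function names and numbers below are illustrative assumptions): with two competing strategies, one empirical probability suffices to fit $\lambda$ and predict the other.

```python
import math

def parallel_probability(c_seq, c_par, lam):
    """p_par under the two-strategy normalization p_seq + p_par = 1."""
    r = math.exp(-lam * (c_par - c_seq))  # ratio p_par / p_seq
    return r / (1.0 + r)

def fit_lambda(p_par_empirical, c_seq, c_par):
    """Recover lam from one empirically observed parallel-strategy share."""
    odds = p_par_empirical / (1.0 - p_par_empirical)
    return -math.log(odds) / (c_par - c_seq)

lam = fit_lambda(0.3, c_seq=10.0, c_par=14.0)      # fit on one task size
print(lam, parallel_probability(10.0, 14.0, lam))  # round-trips to 0.3
```

Once $\lambda$ is fitted for one task size, the same formula yields predicted strategy probabilities for other complexities, which is the comparison with empirical frequencies reported in the paper.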