- Weighted vector finite element method and its applications
Computer Research and Modeling, 2019, v. 11, no. 1, pp. 71-86

Mathematical models of many natural processes are described by partial differential equations with singular solutions. Classical numerical methods for determining an approximate solution to such problems are inefficient. In the present paper, a boundary value problem for the vector wave equation in an L-shaped domain is considered. The presence of a reentrant corner of size $3\pi/2$ on the boundary of the computational domain leads to a strong singularity of the solution, i.e. it does not belong to the Sobolev space $H^1$, so classical and special numerical methods converge at a rate lower than $O(h)$. Therefore, a special weighted set of vector functions is introduced, in which the solution of the considered boundary value problem is defined as an $R_\nu$-generalized one.

For the numerical determination of the $R_\nu$-generalized solution, a weighted vector finite element method is constructed. The basic difference of this method is that the basis functions contain, as a factor, a special weight function raised to a degree that depends on the properties of the solution of the initial problem. This makes it possible to significantly increase the convergence rate of the approximate solution to the exact one as the mesh is refined. Moreover, the introduced basis functions are solenoidal, so the solenoidal condition on the solution is satisfied exactly and spurious numerical solutions are prevented.

Results of numerical experiments are presented for a series of model problems of different types: some have a solution containing only a singular component, while others have a solution containing both singular and regular components. The experiments showed that, as the finite element mesh is refined, the convergence rate of the constructed weighted vector finite element method is $O(h)$, which is more than one and a half times better than that of the special methods developed for the described problem, namely the singular complement method and the regularization method. Other features of the constructed method are its algorithmic simplicity and the naturalness of the solution determination, which is beneficial for numerical computations.
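The central idea above, a standard finite element basis function multiplied by a weight raised to a power tied to the singularity, can be illustrated with a minimal Python sketch. The choice of weight function (distance to the reentrant corner), the exponent value, and the toy nodal function are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

# Illustrative sketch of a weighted FE basis function.
# Assumptions (not from the paper): the weight rho(x) is the distance from
# x to the reentrant corner placed at the origin, and the exponent nu > 0
# is chosen according to the strength of the singularity.

def rho(x, corner=np.zeros(2)):
    """Distance from x to the reentrant corner (the weight function)."""
    return np.linalg.norm(np.asarray(x) - corner)

def weighted_basis(phi, nu):
    """Multiply a standard FE basis function phi by the weight rho**nu."""
    return lambda x: rho(x) ** nu * phi(x)

# Example: a toy hat-like nodal function scaled by the weight near the corner.
phi = lambda x: max(0.0, 1.0 - np.abs(x).sum())
phi_nu = weighted_basis(phi, nu=0.5)
print(phi_nu(np.array([0.1, 0.2])))
```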
- Development of an anisotropic nonlinear noise-reduction algorithm for computed tomography data with a context dynamic threshold
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 233-248

The article deals with the development of a noise-reduction algorithm based on anisotropic nonlinear filtering of computed tomography (CT) data. An analysis of domestic and foreign literature has shown that the most effective noise-reduction algorithms for CT data use complex methods of data analysis and processing, such as bilateral, adaptive, three-dimensional, and other types of filtering. However, combinations of such techniques are rarely used in practice because of the long processing time per slice. It was therefore decided to develop an efficient and fast noise-reduction algorithm based on a simplified bilateral filtering method with three-dimensional data accumulation. The algorithm was developed in C++11 in Microsoft Visual Studio 2015. The main distinction of the developed algorithm is its use of an improved mathematical model of CT noise, developed earlier by our team, based on the Poisson and Gaussian distributions of the logarithmized value. This allows a more accurate determination of the noise level and, thus, of the data-processing threshold. As a result of applying the algorithm, processed CT data with a lower noise level were obtained. Visual evaluation showed the increased information content of the processed data compared to the original data, clearer rendering of homogeneous regions, and a significant reduction of noise in the processed areas. Numerical assessment showed a more than sixfold decrease in the standard deviation (SD) in the processed areas, while high values of the coefficient of determination showed that the data were not distorted and changed only due to the removal of noise. The newly developed context dynamic threshold made it possible to decrease the SD level in every area of the data. Its main advantages are simplicity and speed, achieved by a preliminary estimation of the data array and the derivation of threshold values assigned to each pixel of the CT image. Its principle of operation is based on threshold criteria, which fits well both into the developed noise-reduction algorithm based on anisotropic nonlinear filtering and into other noise-reduction algorithms. The algorithm successfully functions as part of the MultiVox workstation and is being prepared for deployment in the unified radiological network of the city of Moscow.
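A minimal sketch of the core filtering step is given below, assuming a simplified bilateral filter on a single 2D slice. The fixed range parameter `sigma_r` stands in for the paper's context dynamic threshold (which is estimated per pixel from the noise model), and the three-dimensional accumulation is omitted.

```python
import numpy as np

# Minimal sketch of a simplified bilateral filter for one 2D CT slice.
# sigma_s controls geometric closeness, sigma_r photometric similarity;
# all parameter values are illustrative.

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=30.0):
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial kernel
    pad = np.pad(img.astype(float), radius, mode='reflect')
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            center = pad[i + radius, j + radius]
            rng = np.exp(-((patch - center) ** 2) / (2 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

# Example: denoise a synthetic noisy slice.
noisy = np.random.normal(100.0, 15.0, (64, 64))
denoised = bilateral_filter(noisy)
```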
- Investigation of Turing structures formation under the influence of wave instability
Computer Research and Modeling, 2019, v. 11, no. 3, pp. 397-412

A classical model of nonlinear dynamics, the Brusselator, is considered, augmented by a third variable that plays the role of a fast-diffusing inhibitor. The model is investigated in the one-dimensional case in the parametric domain where two types of diffusive instability of the system's homogeneous stationary state are manifested: wave instability, which leads to spontaneous formation of autowaves, and Turing instability, which leads to spontaneous formation of stationary dissipative structures, or Turing structures. It is shown that, due to the subcritical nature of the Turing bifurcation, the interaction of the two instabilities in this system results in spontaneous formation of stationary dissipative structures even before the Turing bifurcation is passed. In response to different perturbations of the spatially uniform stationary state, different stable regimes are manifested in the vicinity of the double bifurcation point in the parametric region under study: pure regimes, consisting of either stationary or autowave dissipative structures, and mixed regimes, in which different modes dominate in different areas of the computational space. In this region of parameter space the system is multistable and highly sensitive to initial noise, which blurs the boundaries between qualitatively different regimes. Even in the area of dominance of mixed modes with a prevalence of Turing structures, the establishment of a pure autowave regime has significant probability. In the case of stable mixed regimes, a sufficiently strong local perturbation in an area of the computational space where the autowave mode is manifested can initiate local formation of new stationary dissipative structures. Local perturbation of the stationary homogeneous state in the parametric region under investigation leads to a qualitatively similar map of established modes, with the zone of dominance of pure autowave regimes expanding as the amplitude of the local perturbation increases. In the two-dimensional case, mixed regimes turn out to be only transient: once localized Turing structures appear under the influence of the wave regime, they eventually occupy all available space.
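The structure of such a computation can be sketched as an explicit finite-difference integration of a three-component reaction-diffusion system in 1D. The reaction terms, the coupling of the fast-diffusing third variable, and all parameter values below are illustrative placeholders, not the exact model of the paper; the point is the structure: a Brusselator-type two-component core plus a third component with large diffusion coefficient.

```python
import numpy as np

# Schematic explicit integrator for a 1D three-component reaction-diffusion
# system with periodic boundaries. Reaction terms and parameters are toy
# placeholders chosen for illustration only.

def step(U, D, dx, dt, reactions):
    lap = (np.roll(U, 1, axis=1) - 2 * U + np.roll(U, -1, axis=1)) / dx**2
    return U + dt * (D[:, None] * lap + reactions(U))

N, dx, dt = 256, 0.5, 0.001
D = np.array([1.0, 8.0, 100.0])           # third component diffuses fastest
U = 1.0 + 0.1 * np.random.rand(3, N)      # noisy homogeneous initial state

def reactions(U):
    u, v, w = U
    a, b, c = 3.0, 9.0, 1.0               # toy parameters
    return np.vstack([a - (b + 1) * u + u**2 * v - c * w,
                      b * u - u**2 * v,
                      u - w])             # linear fast-inhibitor coupling (assumed)

for _ in range(20000):
    U = step(U, D, dx, dt, reactions)
```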
- Hierarchical method for mathematical modeling of stochastic thermal processes in complex electronic systems
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 613-630

A hierarchical method for mathematical and computer modeling of interval-stochastic thermal processes in complex electronic systems for various purposes is developed. The concept of hierarchical structuring reflects both the constructive hierarchy of a complex electronic system and the hierarchy of mathematical models of heat exchange processes. Thermal processes that take into account various physical phenomena in complex electronic systems are described by systems of stochastic, unsteady, nonlinear partial differential equations, so their computer simulation encounters considerable computational difficulties even on supercomputers. The hierarchical method avoids these difficulties. The hierarchical structure of an electronic system design is, in general, characterized by five levels: level 1, the active elements of the electronic system (microcircuits, electro-radio elements); level 2, an electronic module; level 3, a panel combining a set of electronic modules; level 4, a block of panels; level 5, a stand installed in a stationary or mobile room. The hierarchy of models and the modeling of stochastic thermal processes are constructed in the reverse order of the hierarchical structure of the design, and the modeling of interval-stochastic thermal processes is carried out by deriving equations for statistical measures. The method developed in the article makes it possible to take into account the principal features of thermal processes: the stochastic nature of thermal, electrical, and design factors in the production, assembly, and installation of electronic systems; the stochastic scatter of operating conditions and the environment; nonlinear temperature dependencies of heat exchange factors; and the unsteady nature of thermal processes. The equations obtained for statistical measures of the stochastic thermal processes form a system of 14 non-stationary nonlinear first-order ordinary differential equations, whose solution is easily implemented on modern computers by existing numerical methods. Results of applying the method to computer simulation of stochastic thermal processes in electronic systems are considered. The hierarchical method is applied in practice to the thermal design of real electronic systems and the creation of modern competitive devices.
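A schematic illustration of the statistical-measure approach, reduced to a single temperature variable: instead of a stochastic heat balance, deterministic ODEs for the mean and variance are integrated. The right-hand sides below are toy expressions for a linear heat balance with a random ambient temperature, not the paper's 14-equation system; all parameter values are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy statistical-measure model: deterministic ODEs for the mean m(t) and
# variance s(t) of a temperature T under a linear heat balance with random
# ambient temperature. Illustrative closure, not the paper's system.

k, C = 0.8, 10.0          # heat transfer factor, heat capacity (toy values)
m_env, s_env = 25.0, 4.0  # mean and variance of the ambient temperature
P = 50.0                  # dissipated power

def rhs(t, y):
    m, s = y
    dm = (P - k * (m - m_env)) / C            # mean temperature balance
    ds = (-2 * k * s + k**2 * s_env) / C**2   # variance balance (toy closure)
    return [dm, ds]

sol = solve_ivp(rhs, (0.0, 200.0), [25.0, 0.0])
print(sol.y[:, -1])   # stationary mean and variance of the temperature
```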
- Difference scheme for solving problems of hydrodynamics for large grid Peclet numbers
Computer Research and Modeling, 2019, v. 11, no. 5, pp. 833-848

The paper discusses the development and application of a method that accounts for the fullness of rectangular cells with a material substance, in particular a liquid, in order to increase the smoothness and accuracy of finite-difference solutions of hydrodynamic problems with a complexly shaped boundary surface. Two problems of computational hydrodynamics are considered to study the capabilities of the proposed difference schemes: the spatially two-dimensional flow of a viscous fluid between two coaxial semi-cylinders, and the transport of substances between coaxial semi-cylinders. Discretization of the diffusion and convection operators was performed on the basis of the integro-interpolation method, both with and without taking cell fullness into account. For solving the diffusion–convection problem at large grid Peclet numbers, it is proposed to use a difference scheme that takes the cell fullness function into account, together with a scheme based on a linear combination of the Upwind and Standard Leapfrog difference schemes with weight coefficients obtained by minimizing the approximation error at small Courant numbers. An analytical solution describing the Couette–Taylor flow is used as a reference to estimate the accuracy of the numerical solution. The relative error of calculations reaches 70% when rectangular grids are used directly (stepwise approximation of the boundaries); under the same conditions, the proposed method reduces the error to 6%. It is shown that refining the rectangular grid by a factor of 2–8 in each spatial direction does not provide the accuracy achieved by the numerical solutions obtained with cell fullness taken into account. For the diffusion–convection problem, the proposed difference schemes based on a linear combination of the Upwind and Standard Leapfrog schemes, with weighting factors of 2/3 and 1/3 respectively obtained by minimizing the approximation error, have lower grid viscosity and, as a consequence, describe the behavior of the solution more precisely at large grid Peclet numbers.
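The combined scheme with weights 2/3 (Upwind) and 1/3 (Standard Leapfrog) can be sketched for the model 1D transport equation $u_t + cu_x = 0$. The grid parameters, initial profile, and periodic boundary conditions below are illustrative assumptions, not those of the paper's test problems.

```python
import numpy as np

# Sketch of the combined scheme for 1D advection u_t + c u_x = 0:
# a linear combination of Upwind (weight 2/3) and Standard Leapfrog
# (weight 1/3). Assumes c > 0 and periodic boundaries (np.roll).

N, c = 200, 1.0
dx = 1.0 / N
dt = 0.4 * dx / c                 # small Courant number
r = c * dt / dx

x = np.linspace(0.0, 1.0, N, endpoint=False)
u_prev = np.exp(-200 * (x - 0.3) ** 2)   # profile at time level n-1
u = u_prev.copy()                        # level n (first step degenerates)

for _ in range(300):
    upwind = u - r * (u - np.roll(u, 1))                    # backward difference
    leapfrog = u_prev - r * (np.roll(u, -1) - np.roll(u, 1))  # central, two-level
    u_prev, u = u, (2.0 / 3.0) * upwind + (1.0 / 3.0) * leapfrog
```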
- Neural network analysis of transportation flows of urban agglomeration using the data from public video cameras
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 305-318

Correct modeling of the complex dynamics of urban transportation flows requires the collection of large volumes of empirical data to specify the types of modes and to identify them. At the same time, setting up a large number of observation posts is expensive and not always technically feasible. This results in insufficient factual support both for traffic control systems and for urban planners, with obvious consequences for the quality of their decisions. As one means of large-scale data collection, at least for qualitative situation analysis, wide-area video cameras are used in various situation centers, where the video is analyzed by human operators responsible for observation and control. Some video cameras provide public access to their streams, which makes them a valuable resource for transportation studies. However, obtaining qualitative data from such cameras poses significant problems related to the theory and practice of image processing. This study is devoted to the practical application of mainstream neural network technologies for estimating the essential characteristics of actual transportation flows. The problems arising in processing these data are analyzed, and solutions are suggested. Convolutional neural networks are used for tracking, and methods for obtaining the basic parameters of transportation flows from these observations are studied. Simplified neural networks are used to prepare training sets for the deep learning neural network YOLOv4, which is then used to estimate the speed and density of automobile flows.
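A sketch of the post-processing stage, turning per-frame detections into flow characteristics, is given below. The camera calibration factor, the frame rate, and the track format are hypothetical placeholders; any detector returning bounding boxes (e.g. YOLOv4) together with a tracker (not shown) could supply the input.

```python
# Schematic conversion of vehicle detections into flow parameters.
# METERS_PER_PIXEL and FPS are hypothetical calibration constants.

METERS_PER_PIXEL = 0.05
FPS = 25.0

def centers(boxes):
    """Box centers from detector output; a tracker links these into tracks."""
    return [((x1 + x2) / 2.0, (y1 + y2) / 2.0) for x1, y1, x2, y2 in boxes]

def flow_parameters(tracks, road_length_m):
    """tracks: per-vehicle sequences of centers over consecutive frames."""
    speeds = []
    for track in tracks:
        (x0, y0), (x1, y1) = track[0], track[-1]
        dist_m = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * METERS_PER_PIXEL
        speeds.append(dist_m / ((len(track) - 1) / FPS))  # mean speed, m/s
    density = len(tracks) / road_length_m                  # vehicles per meter
    return sum(speeds) / len(speeds), density

# Example: one vehicle tracked over three frames on a 50 m road segment.
track = [(100.0, 200.0), (104.0, 200.0), (108.0, 200.0)]
print(flow_parameters([track], road_length_m=50.0))
```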
- On the stability of the gravitational system of many bodies
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 487-511

In this paper, a gravitational system is understood as a set of point bodies that interact according to Newton's law of attraction and have a negative total energy. The question of the stability (instability) of a gravitational system of general position is examined by direct computational experiment. A gravitational system of general position is one in which the masses, initial positions, and velocities of the bodies are chosen randomly from given ranges. For the computational experiment, a new method for the numerical solution of ordinary differential equations over large time intervals has been developed. The proposed method, on the one hand, ensures the fulfillment of all conservation laws by a suitable correction of solutions and, on the other hand, allows the use of standard low-order numerical methods for systems of differential equations. Within this method, the trajectory of a gravitational system in phase space is assembled from parts, each of which can have macroscopic duration. The constructed trajectory is, generally speaking, discontinuous, and the junction points of the individual pieces act as branch points. In connection with this, the proposed method can in part be attributed to the class of Monte Carlo methods. The general conclusion from a series of computational experiments is that gravitational systems of general position with 3 or more bodies are, generally speaking, unstable. Within the proposed method, the special case of zero total angular momentum of a gravitational system with 3 or more bodies, as well as the two-body problem, are considered separately. The numerical modeling of the dynamics of the solar system in time is considered as a separate case. Using analytical methods, as well as direct numerical methods of high approximation order (10 and higher), the stability of the solar system had previously been demonstrated over an interval of five billion years or more. Owing to limitations of the available computational resources, the stability of the dynamics of the planets of the solar system within the proposed method was confirmed for a period of ten million years. A computational experiment is also used to examine one possible scenario for the disintegration of the solar system.
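The correction idea can be illustrated with a minimal N-body sketch in which, after each integration step, velocities are rescaled so that the total energy returns to its initial value. This toy correction handles energy only; the paper's procedure enforces all conservation laws, and the initial data below are arbitrary.

```python
import numpy as np

# Minimal N-body sketch: a low-order (leapfrog) step followed by a simple
# energy correction via velocity rescaling. Illustrative only.

G = 1.0

def accelerations(pos, mass):
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        d = pos - pos[i]
        r3 = (np.einsum('ij,ij->i', d, d) + 1e-12) ** 1.5  # softened |d|^3
        acc[i] = G * np.sum(mass[:, None] * d / r3[:, None], axis=0)
    return acc

def potential(pos, mass):
    pot = 0.0
    for i in range(len(mass)):
        for j in range(i + 1, len(mass)):
            pot -= G * mass[i] * mass[j] / np.linalg.norm(pos[i] - pos[j])
    return pot

def corrected_step(pos, vel, mass, dt, E0):
    vel = vel + 0.5 * dt * accelerations(pos, mass)   # leapfrog: kick
    pos = pos + dt * vel                              # drift
    vel = vel + 0.5 * dt * accelerations(pos, mass)   # kick
    kin = 0.5 * np.sum(mass * np.einsum('ij,ij->i', vel, vel))
    pot = potential(pos, mass)
    if kin > 0 and E0 > pot:                          # rescale back to E0
        vel *= np.sqrt((E0 - pot) / kin)
    return pos, vel

mass = np.random.uniform(0.5, 1.5, 4)
pos, vel = np.random.randn(4, 3), 0.1 * np.random.randn(4, 3)
E0 = 0.5 * np.sum(mass * np.einsum('ij,ij->i', vel, vel)) + potential(pos, mass)
for _ in range(1000):
    pos, vel = corrected_step(pos, vel, mass, 1e-3, E0)
```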
- Estimation of the probability of spontaneous synthesis of computational structures in relation to the implementation of parallel information processing
Computer Research and Modeling, 2021, v. 13, no. 4, pp. 677-696

We consider a model of the spontaneous formation of a computational structure in the human brain for solving a given class of tasks in the course of performing a series of similar tasks. The model is based on a special definition of a numerical measure of the complexity of the solution algorithm. This measure has an informational property: the complexity of a computational structure consisting of two independent structures is equal to the sum of the complexities of these structures. The probability of spontaneous formation of a structure then depends exponentially on its complexity. The exponential coefficient must be determined experimentally for each type of problem; it may depend on the form of presentation of the source data and on the procedure for issuing the result. This estimation method was applied to the results of a series of experiments, described in previously published papers, that determined the strategy for solving a series of similar problems with a growing number of initial data. Two main strategies were considered: sequential execution of the computational algorithm, or the use of parallel computing in those tasks where it is effective. These strategies differ in how the calculations are performed. Using the complexity estimates of the schemes, the empirical probability of one strategy can be used to calculate the probability of the other. The calculations performed showed a good match between the calculated and empirical probabilities, which confirms the hypothesis of the spontaneous formation of structures that solve the problem during a person's initial training. The paper contains a brief description of the experiments, detailed computational schemes, a strict definition of the complexity measure of computational structures, and the derivation of the dependence of the probability of structure formation on its complexity.
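The exponential dependence can be made concrete with a short calculation: if the probability of spontaneous formation of a structure $S$ is proportional to $e^{-\lambda C(S)}$, the probability of one strategy follows from the empirical probability of the other and the complexity difference. All numbers in the sketch are made-up placeholders, not experimental data from the paper.

```python
import math

# Illustrative use of the exponential complexity model:
# P(S) is proportional to exp(-lam * C(S)), with C additive over
# independent substructures, so P(B)/P(A) = exp(-lam * (C(B) - C(A))).

def other_strategy_probability(p_a, c_a, c_b, lam):
    """P(B) = P(A) * exp(-lam * (C(B) - C(A)))."""
    return p_a * math.exp(-lam * (c_b - c_a))

# Example with placeholder values: strategy A observed with probability 0.7,
# complexities C(A) = 12, C(B) = 15, fitted coefficient lam = 0.28.
print(other_strategy_probability(0.7, 12.0, 15.0, 0.28))
```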
- Analysis of mechanical structures of complex technical systems
Computer Research and Modeling, 2021, v. 13, no. 5, pp. 903-916

The work is devoted to the structural analysis of complex technical systems. Mechanical structures are considered whose properties affect the behavior of products during assembly, repair, and operation. The main source of data on parts and the mechanical connections between them is a hypergraph, a model that formalizes the multidimensional basing relation. The hypergraph correctly describes the connectivity and mutual coordination of parts achieved during assembly of the product. When developing complex products in CAD systems, an engineer often makes serious design mistakes: overbasing of parts and non-sequential assembly operations. Effective ways of identifying these structural defects are proposed. It is shown that the property of independent assembly can be represented as a closure operator whose domain is the Boolean (power set) of the set of product parts. The images of this operator are connected and coordinated subsets of parts that can be assembled independently. A lattice model is described that constitutes the state space of the product during assembly, disassembly, and decomposition into assembly units; it serves as a source of various structural information about the design. Numerical estimates are proposed for the cardinality of the set of admissible alternatives in the problems of choosing an assembly sequence and of decomposition into assembly units. For many technical operations (for example, control, testing, etc.), all the operand parts must be mounted in one assembly unit. A simple formalization of the technical conditions requiring the inclusion of parts in (or exclusion from) an assembly unit is developed. A theorem giving a mathematical description of product decomposition into assembly units in exact lattice terms is presented. A method for the numerical evaluation of the robustness of the mechanical structure of a complex technical system is proposed.
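A minimal sketch of the independent-assembly closure operator on the power set of parts is given below. The representation of basing relations as (part, base set) pairs and the "fully based" test are simplifying assumptions for illustration, not the paper's exact hypergraph formalism.

```python
# Toy closure operator on part sets: the closure of X repeatedly adds every
# part whose base set is already contained in the current set. The result is
# extensive, monotone, and idempotent, as a closure operator must be.

def closure(X, basing):
    """basing: iterable of (part, frozenset_of_base_parts) pairs."""
    closed = set(X)
    changed = True
    while changed:
        changed = False
        for part, base in basing:
            if part not in closed and base <= closed:
                closed.add(part)   # part is coordinated by the current set
                changed = True
    return closed

# Example: part 'c' is based on {'a', 'b'}; part 'd' is based on {'c'}.
basing = [('c', frozenset({'a', 'b'})), ('d', frozenset({'c'}))]
print(closure({'a', 'b'}, basing))   # {'a', 'b', 'c', 'd'}
```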
- Comparison of the results of using various evolutionary algorithms to solve the problem of route optimization of unmanned vehicles
Computer Research and Modeling, 2022, v. 14, no. 1, pp. 45-62

In this paper, a comparative analysis of an exact algorithm and heuristic algorithms, represented by the branch-and-bound method and by genetic and ant colony algorithms respectively, is carried out for finding the optimal solution of the traveling salesman problem, using the example of a courier robot. The purpose of the work is to determine the running time, the length of the obtained route, and the amount of memory required by the program for the branch-and-bound method and the evolutionary heuristic algorithms, and to identify the most appropriate of these methods for the specified conditions. The article uses the materials of a study implemented as a computer program whose code is written in Python. In the course of the study, a number of applicability criteria were selected (the running time of the program, the length of the constructed route, and the amount of memory required), the results of the algorithms under the specified conditions were obtained, and conclusions were drawn about the expediency of using each algorithm under the various specified operating conditions of the courier robot. It turned out that for a small number of points ($\leqslant 10$) the branch-and-bound method is preferable, since it finds the optimal solution faster. However, when the number of points grows beyond 10, the running time of this method increases exponentially. In this case, the heuristic approach using the genetic and ant colony algorithms gives more effective results. The ant colony algorithm produces solutions closest to the reference ones, including for more than 16 points; its relative disadvantage is the greatest resource intensity among the considered algorithms. The genetic algorithm gives similar results, but beyond 16 points the length of the found route increases relative to the reference one; its advantage is a lower resource intensity compared to the other algorithms.
The practical significance of this article lies in the potential use of the obtained results for the optimal solution of logistics problems by an automated system in various fields: warehouse logistics, transport logistics, "last mile" logistics, etc. A small benchmark sketch illustrating the exact-versus-heuristic trade-off follows below.
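To reproduce the qualitative effect (exact search wins on small inputs, heuristics scale better), here is a minimal benchmark harness. The exhaustive solver and nearest-neighbor heuristic are simplified stand-ins for the branch-and-bound, genetic, and ant colony algorithms studied in the paper.

```python
import itertools, random, time

# Toy TSP benchmark: exhaustive search vs a greedy heuristic on random points.

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def tour_length(tour, pts):
    return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def exact(pts):
    """Optimal tour by checking all permutations (factorial time)."""
    best = min(itertools.permutations(range(1, len(pts))),
               key=lambda p: tour_length((0,) + p, pts))
    return tour_length((0,) + best, pts)

def nearest_neighbor(pts):
    """Greedy heuristic: always visit the closest unvisited point."""
    left, tour = set(range(1, len(pts))), [0]
    while left:
        nxt = min(left, key=lambda j: dist(pts[tour[-1]], pts[j]))
        left.remove(nxt)
        tour.append(nxt)
    return tour_length(tour, pts)

pts = [(random.random(), random.random()) for _ in range(9)]
for name, solver in [('exact', exact), ('heuristic', nearest_neighbor)]:
    t0 = time.perf_counter()
    length = solver(pts)
    print(name, round(length, 3), f'{time.perf_counter() - t0:.3f}s')
```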