All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Verification of calculated characteristics of supersonic turbulent jets
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 21-35
Verification results for computed characteristics of supersonic turbulent jets are presented. Numerical simulation of an axisymmetric nozzle is performed in the FlowVision CFD software using open CFD test cases. The test cases include the Seiner experiments at an exit Mach number of 2.0, both fully expanded and under-expanded $(P/P_0 = 1.47)$. The fully expanded nozzle is investigated over a wide range of flow temperatures (300…3000 K). These studies simulate the flow downstream of the nozzle exit. A further numerical investigation is presented at an exit Mach number of 2.02 and a free-stream Mach number of 2.2. The geometric model of the convergent-divergent nozzle is rebuilt from the original Putnam experiment. This study is set up with a nozzle pressure ratio of 8.12 and a total temperature of 317 K.
The paper compares the obtained FlowVision results with experimental data and with other current CFD studies. The comparison of the calculated characteristics with the experimental data indicates good agreement. The best agreement with Seiner's experimental velocity distribution (within about 7 % in the far field for the first case) is obtained using the standard two-equation $k–\varepsilon$ turbulence model with the Wilcox compressibility correction. The predicted Mach number distribution at $Y/D = 1$ for the Putnam nozzle is accurate to within 3 %.
General guidelines for the simulation of supersonic turbulent jets in the FlowVision software are formulated in the given paper. A grid convergence study determined the optimal cell count. In order to calculate the design regime, it is recommended to build a grid containing not less than 40 cells from the axis of symmetry to the nozzle wall. In order to calculate an off-design regime, it is necessary to resolve the shock waves; for this purpose, not less than 80 cells are required in the radial direction. Investigation of the influence of the turbulence model on the flow characteristics has shown that the version of the SST $k–\omega$ turbulence model implemented in the FlowVision software essentially underpredicts the axial velocity. The standard $k–\varepsilon$ model without compressibility correction also underpredicts the axial velocity; these calculations agree well with calculations in other CFD codes using the standard $k–\varepsilon$ model. The in-house $k–\varepsilon$ turbulence model KEFV with compressibility correction slightly overpredicts the axial velocity. Since the best results are obtained using the standard $k–\varepsilon$ model combined with the Wilcox compressibility correction, this model is recommended for the problems discussed.
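For reference, a minimal sketch of the Wilcox compressibility correction in the form it is usually published (dilatational dissipation added as a function of the turbulent Mach number); the constants $\xi^* = 1.5$, $M_{t0} = 0.25$ are the standard published values, and the exact variant wired into FlowVision may differ:

```python
import numpy as np

# Standard published constants of the Wilcox (1992) correction;
# the values used inside FlowVision may differ.
XI_STAR, M_T0 = 1.5, 0.25

def wilcox_dissipation_factor(k, a):
    """Factor multiplying the incompressible dissipation rate eps_s:
    eps = eps_s * (1 + xi* F(M_t)), F(M_t) = (M_t^2 - M_t0^2) H(M_t - M_t0),
    where M_t = sqrt(2k)/a is the turbulent Mach number."""
    m_t = np.sqrt(2.0 * k) / a
    f = np.maximum(m_t**2 - M_T0**2, 0.0)   # Heaviside cut-off below M_t0
    return 1.0 + XI_STAR * f

# The correction is inactive in weakly compressible turbulence ...
print(wilcox_dissipation_factor(k=2000.0, a=340.0))    # M_t ~ 0.19 -> 1.0
# ... and augments dissipation in highly turbulent jet shear layers:
print(wilcox_dissipation_factor(k=10000.0, a=340.0))   # M_t ~ 0.42 -> ~1.17
```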
The developed methodology can be regarded as a basis for numerical investigations of more complex nozzle flows.
-
Direct multiplicative methods for sparse matrices. Newton methods
Computer Research and Modeling, 2017, v. 9, no. 5, pp. 679-703
We consider a numerically stable direct multiplicative algorithm for solving systems of linear equations that takes into account the sparseness of matrices presented in packed form. The advantage of the algorithm is the ability to minimize the fill-in of the principal rows of the multipliers without losing the accuracy of the results; moreover, the position of the next processed row of the matrix is never changed, which allows static data storage formats to be used. Solving a linear system by the direct multiplicative algorithm is, like solving with the $LU$-decomposition, just another implementation scheme of the Gaussian elimination method.
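As a point of reference, a toy dense sketch of this multiplicative view of Gaussian elimination, in which the elimination is written as a sequence of elementary multiplier matrices; the paper's actual algorithm works on sparse matrices in packed storage and minimizes fill-in, which this illustration does not attempt:

```python
import numpy as np

def solve_by_multipliers(A, b):
    """Gaussian elimination expressed as a product of elementary multiplier
    matrices M_k applied to A and b, the multiplicative counterpart of the
    LU view.  Toy dense version with no pivoting; the paper's algorithm
    works on sparse matrices in packed storage and minimizes fill-in."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        M = np.eye(n)
        M[k + 1:, k] = -A[k + 1:, k] / A[k, k]   # elementary multiplier column
        A, b = M @ A, M @ b                      # zero column k below the pivot
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):               # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
print(solve_by_multipliers(A, b))     # matches np.linalg.solve(A, b)
```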
In this paper, this algorithm is the basis for solving the following problems:
Problem 1. Specifying the descent direction in Newton-type methods of unconstrained optimization by integrating one of the known techniques for constructing an essentially positive definite matrix. This approach allows us to weaken or remove the additional specific difficulties caused by the need to solve large systems of equations with sparse matrices presented in packed form.
Problem 2. Construction of a new mathematical formulation of the quadratic programming problem and a new form of specifying the necessary and sufficient optimality conditions. These are quite simple and can be used to construct mathematical programming methods (for example, to find the minimum of a quadratic function on a polyhedral set of constraints) based on solving systems of linear equations whose dimension is not higher than the number of variables of the objective function.
Problem 3. Construction of a continuous analogue of the problem of minimizing a real quadratic polynomial in Boolean variables, and a new form of specifying the necessary and sufficient optimality conditions, for developing methods that solve this problem in polynomial time. As a result, the original problem is reduced to finding the minimum distance between the origin and an angular point (vertex) of a convex polyhedron which is a perturbation of the $n$-dimensional cube and is described by a system of double linear inequalities with an upper triangular coefficient matrix with units on the main diagonal. Only two faces are subject to investigation, one of which, or both, contains the vertices closest to the origin. To compute them, it is sufficient to solve $4n - 4$ systems of linear equations and to choose among the solutions all the nearest equidistant vertices in polynomial time. The problem of minimizing a quadratic polynomial is $NP$-hard, since the $NP$-hard vertex cover problem for an arbitrary graph reduces to it. It follows therefrom that $P = NP$, a conclusion based on developments going beyond the limits of integer optimization methods.
-
Bayesian localization for autonomous vehicle using sensor fusion and traffic signs
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 295-303
The localization of a vehicle is an important task in the field of intelligent transportation systems. It is well known that sensor fusion helps to create more robust and accurate systems for autonomous vehicles. Standard approaches, like the extended Kalman filter or the particle filter, are inefficient in the case of highly non-linear data or have a high computational cost, which complicates their use in embedded systems. A significant increase of precision, especially when GPS (Global Positioning System) is unavailable, may be achieved by using landmarks with known locations, such as traffic signs, traffic lights, or SLAM (Simultaneous Localization and Mapping) features. However, this approach may be inapplicable if the a priori locations are unknown or not accurate enough. We suggest a new approach for refining the coordinates of a vehicle by using landmarks such as traffic signs. The core part of the suggested system is a Bayesian framework which refines the vehicle location using external data about previous traffic sign detections, collected by crowdsourcing. This paper presents an approach that combines trajectories built from global coordinates from GPS and relative coordinates from an Inertial Measurement Unit (IMU) to produce a vehicle's trajectory in an unknown environment. In addition, we collected a new dataset, including data from smartphone GPS and IMU sensors and a video feed from a windshield camera, recorded during 4 car rides on the same route. We also collected precise location data from a Real Time Kinematic Global Navigation Satellite System (RTK-GNSS) device, which can be used for validation. This RTK-GNSS system was used to collect precise data about the traffic sign locations on the route as well. The results show that the Bayesian approach helps with trajectory correction and gives better estimates as the amount of prior information increases. The suggested method is efficient and requires, apart from the GPS/IMU measurements, only information about the vehicle locations during previous traffic sign detections.
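To illustrate the flavor of such a correction step, a minimal one-dimensional Gaussian Bayesian update; this is a hypothetical stand-in, since the paper's framework corrects whole GPS/IMU trajectories using crowdsourced sign detections, and all numbers below are illustrative:

```python
def bayes_update(mu_prior, var_prior, z, var_z):
    """One Gaussian Bayesian update: fuse a prior position estimate
    N(mu_prior, var_prior) with a landmark-derived measurement
    z ~ N(position, var_z).  Toy stand-in for the paper's framework."""
    gain = var_prior / (var_prior + var_z)        # Kalman-style gain
    mu_post = mu_prior + gain * (z - mu_prior)
    var_post = (1.0 - gain) * var_prior
    return mu_post, var_post

# GPS-only estimate of the position along the road (metres), rather uncertain:
mu, var = 120.0, 25.0
# A detected sign with surveyed coordinates implies we are near 116 m:
mu, var = bayes_update(mu, var, z=116.0, var_z=4.0)
print(mu, var)   # the estimate moves toward the landmark, variance shrinks
```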
-
Direct multiplicative methods for sparse matrices. Quadratic programming
Computer Research and Modeling, 2018, v. 10, no. 4, pp. 407-420
A numerically stable direct multiplicative method for solving systems of linear equations that takes into account the sparseness of matrices presented in packed form is considered. The advantage of the method is that the Cholesky factors of the positive definite matrix of the system are computed, and the system is solved, within the framework of one procedure. Furthermore, it is possible to minimize the fill-in of the principal rows of the multipliers without losing the accuracy of the results, and no changes are made to the position of the next processed row of the matrix, which allows static data storage formats to be used. Solving a system of linear equations by the direct multiplicative algorithm is, like solving with the $LU$-decomposition, just another implementation scheme of the Gaussian elimination method.
The computation of the Cholesky factors of a positive definite system matrix and the solution of the corresponding system underlie the construction of a new mathematical formulation of the unconstrained quadratic programming problem and a new form of specifying the necessary and sufficient optimality conditions. These conditions are quite simple, and they are used in this paper to construct a new mathematical formulation of the quadratic programming problem on a polyhedral set of constraints: the problem of finding the minimum distance between the origin and the boundary of the polyhedron defined by the set of constraints, by means of linear algebra and multidimensional geometry.
To determine this distance, it is proposed to apply a known exact method based on solving systems of linear equations whose dimension is not higher than the number of variables of the objective function. The distances are determined by constructing perpendiculars to the faces of the polyhedron of various dimensions. To reduce the number of faces examined, the proposed method involves a special order of traversing the faces. Only the faces containing the vertex closest to the point of the unconstrained extremum, and visible from this point, are subject to investigation. If there are several nearest equidistant vertices, we investigate the face containing all these vertices as well as the faces of smaller dimension that have at least two nearest vertices in common with the first face.
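A minimal dense sketch of the role the Cholesky factors play here, minimizing an unconstrained quadratic through $A = LL^T$; the paper computes the factors with its sparse multiplicative procedure in packed storage, which this illustration does not reproduce:

```python
import numpy as np

def qp_minimizer(A, b):
    """Minimize f(x) = 0.5 x^T A x - b^T x for symmetric positive definite A
    by solving A x = b through the Cholesky factorization A = L L^T.
    Toy dense illustration; the paper uses a sparse multiplicative
    procedure in packed storage to obtain the factors."""
    L = np.linalg.cholesky(A)          # A = L L^T
    y = np.linalg.solve(L, b)          # forward substitution:  L y = b
    return np.linalg.solve(L.T, y)     # backward substitution: L^T x = y

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = qp_minimizer(A, b)
print(x, A @ x - b)                    # the gradient A x - b vanishes at x*
```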
-
Simulation of flight and destruction of the Benešov bolide
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 605-618
Comets and asteroids are recognized by scientists and the governments of all countries in the world as one of the most significant threats to the development and even the existence of our civilization. Preventing this threat includes studying the motion of large meteors through the atmosphere, which is accompanied by various physical and chemical phenomena. Of particular interest for such studies are the meteors whose trajectories have been recorded and whose fragments have been found on Earth. Here, we study one such case. We develop a model for the motion and destruction of natural bodies in the Earth's atmosphere, focusing on the Benešov bolide (EN070591), a bright meteor registered in 1991 in the Czech Republic by the European Fireball Network. Unique data, including radiation spectra, are available for this bolide. We simulate the aeroballistics of the Benešov meteoroid and of its fragments, taking into account destruction due to thermal and mechanical processes. We compute the velocity of the meteoroid and its mass ablation using the equations of the classical theory of meteor motion, taking into account the variability of the ablation parameter along the trajectory. The fragmentation of the meteoroid is considered using the model of sequential splitting and the statistical theory of strength, which takes into account the dependence of the mechanical strength on the length scale. We compute the air flow around a system of bodies (fragments of the meteoroid) in the regime where their mutual interplay is essential. To that end, we develop a method for simulating air flows based on a set of grids, which allows us to consider fragments of various shapes, sizes, and masses, as well as arbitrary positions of the fragments relative to each other. Due to inaccuracies in the early simulations of the motion of this bolide, its fragments could not be located for about 23 years. Later, more accurate simulations allowed researchers to locate four of its fragments rather far from the location expected earlier. Our simulations of the motion and destruction of the Benešov bolide show that its interaction with the atmosphere is affected by multiple factors, such as the mass and the mechanical strength of the bolide, the parameters of its motion, the mechanisms of destruction, and the interplay between its fragments.
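For orientation, a toy single-body form of the classical meteor equations (drag plus ablation in an exponential atmosphere at a constant entry angle); the paper additionally varies the ablation parameter along the trajectory and models fragmentation, and all parameter values below are illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants (not the paper's fitted values):
RHO0, H_SCALE = 1.29, 7160.0        # sea-level air density [kg/m^3], scale height [m]
CD, CH, Q = 1.0, 0.1, 8.0e6         # drag coeff., heat-transfer coeff., heat of ablation [J/kg]
RHO_M, SHAPE = 3500.0, 1.21         # body density [kg/m^3], shape factor (sphere)
THETA = np.radians(80.0)            # entry angle from the horizontal (steep, Benesov-like)

def rhs(t, y):
    v, m, h = y
    s = SHAPE * (m / RHO_M) ** (2.0 / 3.0)        # midsection area
    rho_a = RHO0 * np.exp(-h / H_SCALE)           # exponential atmosphere
    dv = -CD * rho_a * s * v**2 / (2.0 * m)       # deceleration by drag
    dm = -CH * rho_a * s * v**3 / (2.0 * Q)       # mass loss by ablation
    dh = -v * np.sin(THETA)                       # descent
    return [dv, dm, dh]

def reached_30km(t, y):               # stop before the dense lower atmosphere
    return y[2] - 30_000.0
reached_30km.terminal = True

sol = solve_ivp(rhs, (0.0, 30.0), [21_000.0, 4000.0, 90_000.0],
                events=reached_30km, max_step=0.05)
v, m, h = sol.y[:, -1]
print(f"at h = {h/1e3:.0f} km: v = {v:.0f} m/s, m = {m:.0f} kg")
```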
-
A problem-modeling environment for the numerical solution of the Boltzmann equation on a cluster architecture for analyzing gas-kinetic processes in the interelectrode gap of thermionic converters
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 219-232
This paper is devoted to the application of a numerical method for solving the Boltzmann equation to the problem of modeling the behavior of radionuclides in the cavity of the interelectrode gap of a multielement power-generating channel. The analysis of the gas-kinetic processes in thermionic converters is important for validating the design of the power-generating channel. The paper reviews two design schemes of the channel: with one-way and with two-way withdrawal of gaseous fission products into the vacuum-cesium system. The analysis uses a second-order accurate scheme for the two-dimensional transport part (the left-hand side of the equation) and the projection method for evaluating the right-hand side, the collision integral. In the course of the work, a software package was implemented that makes it possible to run calculations on a cluster architecture by parallelizing the left-hand side of the equation; the paper contains the results of an analysis of the dependence of the calculation efficiency on the number of parallel nodes. The paper presents calculations of the distribution of the pressures of gaseous fission products in the gap cavity for various sets of initial pressures and flows; the dependence of the radionuclide pressure in the collector region on the cesium pressures at the ends of the gap was determined. Tests in the loop channel of a nuclear reactor confirm the obtained results.
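A toy sketch of the transport/collision splitting that such kinetic solvers use; here a density-conserving BGK relaxation toward a local Maxwellian stands in for the projection-method evaluation of the collision integral, and the grids, time step, and relaxation time are arbitrary assumptions:

```python
import numpy as np

NX, NV = 200, 32                                 # space and velocity grid sizes
x = np.linspace(0.0, 1.0, NX, endpoint=False)
v = np.linspace(-3.0, 3.0, NV)                   # discrete velocity nodes
dx, dt, tau = 1.0 / NX, 1e-3, 5e-3               # mesh, time step, relaxation time

# Initial distribution: Maxwellian in v with a sinusoidal density perturbation.
f = (1.0 + 0.1 * np.sin(2 * np.pi * x))[:, None] * np.exp(-v[None, :] ** 2)

def step(f):
    # 1) transport: first-order upwind advection in x, periodic boundaries
    upstream = np.where(v[None, :] > 0,
                        np.roll(f, 1, axis=0), np.roll(f, -1, axis=0))
    f = f - np.abs(v)[None, :] * dt / dx * (f - upstream)
    # 2) collisions: BGK relaxation toward a density-matching local Maxwellian
    #    (a full BGK operator would also match momentum and energy)
    n = f.sum(axis=1, keepdims=True)
    weights = np.exp(-v ** 2)
    f_eq = n * weights[None, :] / weights.sum()
    return f + dt / tau * (f_eq - f)

for _ in range(200):
    f = step(f)
print("discrete mass:", f.sum())   # conserved by both sub-steps
```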
-
The purposeful transformation of mathematical models based on strategic reflection
Computer Research and Modeling, 2019, v. 11, no. 5, pp. 815-831
The study of complex processes in various spheres of human activity is traditionally based on the use of mathematical models. In modern conditions, the development and application of such models is greatly simplified by the availability of high-speed computers and of specialized tools that allow models to be assembled, in effect, from pre-prepared modules. Despite this, the known problems associated with ensuring the adequacy of the model, the reliability of the source data, the practical implementation of the simulation results, the excessively large dimension of the source data, and the joint application of mathematical models that are heterogeneous in complexity and in the degree of integration of the simulated processes are becoming increasingly important. Even more critical may be external constraints imposed on the value of the optimized functional, which are often unattainable within the framework of the constructed model. It is logical to assume that fulfilling these constraints requires a purposeful transformation of the original model, that is, a transition to a mathematical model with a deliberately improved solution. The new model will obviously have a different internal structure (a set of parameters and their interrelations), as well as different formats (domains of definition) of the source data. The possibilities of purposeful change of the initial model investigated by the authors are based on realizing the idea of strategic reflection. The practical implementation of the authors' idea is most difficult, in mathematical terms, for simulation models, for which algorithms for finding optimal solutions have known limitations and sensitivity analysis is in most cases very difficult. Using a rather standard discrete-event simulation model as an example, the article presents typical methodological techniques that allow one to rank the variable parameters by sensitivity and, subsequently, to expand the domain of definition of the variable parameter to which the simulation model is most sensitive. In the transition to the "improved" model, it is also possible simultaneously to exclude parameters whose influence on the optimized functional is insignificant and, conversely, to introduce new parameters corresponding to real processes into the model.
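As a sketch of the ranking step, a one-at-a-time sensitivity ranking with a hypothetical analytic cost functional standing in for the response of a discrete-event simulation; the function, parameter names, and step size are all illustrative assumptions:

```python
def rank_by_sensitivity(model, params, rel_step=0.05):
    """Rank parameters by one-at-a-time sensitivity of the output
    functional: perturb each parameter by +/- rel_step and compare the
    normalized responses.  Hypothetical stand-in for the ranking
    technique described in the paper, whose model is a discrete-event
    simulation rather than an analytic function."""
    base = model(params)
    scores = {}
    for name, value in params.items():
        hi = dict(params, **{name: value * (1 + rel_step)})
        lo = dict(params, **{name: value * (1 - rel_step)})
        scores[name] = abs(model(hi) - model(lo)) / (2 * rel_step * abs(base))
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy cost functional standing in for a simulation response:
cost = lambda p: p["service_rate"] ** -1.5 + 0.1 * p["buffer_size"]
print(rank_by_sensitivity(cost, {"service_rate": 2.0, "buffer_size": 10.0}))
```

The parameter at the head of the returned list is the one whose domain of definition the methodology would expand first.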
-
Methodical questions of numerical simulation of external flows on locally-adaptive grids using wall functions
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1269-1290
The work investigates the possibility of increasing the efficiency of solving external aerodynamic problems. Methodological questions of using locally adaptive grids and wall functions for the numerical simulation of turbulent flows past flying vehicles are studied. The Reynolds-averaged Navier–Stokes equations are integrated, closed by the standard $k–\varepsilon$ turbulence model. Subsonic turbulent flow of a perfect compressible viscous gas past the RAE 2822 airfoil is considered. Calculations are performed in the FlowVision CFD software. The efficiency of using the technology of smoothing diffusion fluxes and the Bradshaw formula for the turbulent viscosity is analyzed. These techniques are regarded as means of increasing the accuracy of solving aerodynamic problems on locally adaptive grids. The obtained results show that using the technology of smoothing diffusion fluxes essentially decreases the discrepancy between the computed and experimental values of the drag coefficient. In addition, the distribution of the skin friction coefficient over the curvilinear surface of the airfoil becomes more regular. These results indicate that the given technology is an effective way to increase the accuracy of calculations on locally adaptive grids. The Bradshaw formula for the dynamic coefficient of turbulent viscosity is traditionally used in the SST $k–\omega$ turbulence model. The possibility of implementing it in the standard $k–\varepsilon$ turbulence model is investigated in the present article. The calculations show that this formula provides good agreement of the integral aerodynamic characteristics, and of the distribution of the pressure coefficient over the airfoil surface, with experimental data. Besides that, it essentially augments the accuracy of simulating the flow in the boundary layer and in the wake. On the other hand, using the Bradshaw formula in the simulation of the air flow past the RAE 2822 airfoil leads to under-prediction of the skin friction coefficient. For this reason, the conclusion is made that practical use of the Bradshaw formula requires its preliminary validation and calibration against reliable experimental data for the flows considered. The results of the work as a whole show that using the discussed technologies in the numerical solution of external aerodynamic problems on locally adaptive grids, together with wall functions, provides computational accuracy acceptable for quick assessment of the aerodynamic characteristics of a flying vehicle. One can thus deduce that the FlowVision software is an effective tool for preliminary design studies, for conceptual design, and for aerodynamic shape optimization.
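For orientation, a sketch contrasting the standard $k–\varepsilon$ eddy viscosity with a Bradshaw-limited form as it appears in the SST model; how FlowVision embeds the Bradshaw formula in its $k–\varepsilon$ model is not specified in the abstract, so the constants and the dropped $F_2$ blending function are assumptions here:

```python
import numpy as np

A1, C_MU = 0.31, 0.09   # Bradshaw constant and the standard k-epsilon constant

def mu_t_standard(rho, k, eps):
    """Standard k-epsilon eddy viscosity: mu_t = rho C_mu k^2 / eps."""
    return rho * C_MU * k**2 / eps

def mu_t_bradshaw(rho, k, eps, strain_rate):
    """Bradshaw-limited eddy viscosity in the SST form
    mu_t = rho a1 k / max(a1 omega, S), rewritten via omega = eps / (C_mu k).
    The SST blending function F2 is dropped here, and FlowVision's actual
    embedding into the k-epsilon model may differ."""
    omega = eps / (C_MU * k)
    return rho * A1 * k / np.maximum(A1 * omega, strain_rate)

# In a weakly strained region both forms coincide; under strong strain the
# Bradshaw form caps the turbulent shear stress at about rho * a1 * k:
rho, k, eps = 1.2, 50.0, 3000.0
for s in (50.0, 5000.0):
    print(s, mu_t_standard(rho, k, eps), mu_t_bradshaw(rho, k, eps, s))
```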
-
Method of forming multiprogram control of an isolated intersection
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 295-303
The simplest and most desirable method of traffic signal control is pre-calculated regulation, when the parameters of the traffic light object's operation are calculated in advance and activated according to a schedule. This work proposes a method of forming a signal plan that allows one to calculate the control programs and set their periods of activity. Preparation of the initial data for the calculation includes forming a time series of daily traffic intensity with an interval of 15 minutes. When carrying out field studies, part of the traffic intensity measurements may be missing; to fill in the missing measurements, spline interpolation is used. The next step of the method is to calculate the daily set of signal plans. The work presents the relations that allow one to calculate the optimal durations of the control cycle and of the permissive (green) phases and to set their periods of activity. Existing traffic control systems have a limit on the number of control programs. To reduce the number of signal plans and to determine their activity periods, clustering by the $k$-means method in the transport phase space is introduced. In the new daily signal plan, the durations of the phases are determined by the coordinates of the obtained cluster centers, and the activity periods are set by the elements included in the cluster. Testing on a numerical example showed that, with 10 clusters, the deviation of the optimal phase durations from the cluster centers does not exceed 2 seconds. To evaluate the effectiveness of the developed methodology, a real intersection with traffic light regulation was considered as an example. Based on field studies of traffic patterns and traffic demand, a microscopic model was developed for the SUMO (Simulation of Urban Mobility) package. The efficiency assessment is based on transport losses estimated via the time spent on movement. Simulation of the multiprogram control of traffic lights showed a 20% reduction of the delay time at the traffic light object in comparison with single-program control. The proposed method allows automating the process of calculating the daily signal plans and setting their times of activity.
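A minimal sketch of the clustering step on synthetic data; the two-phase plan vectors, the two-peak demand profile, and the cluster count of 10 are illustrative assumptions, not the paper's data:

```python
import numpy as np
from sklearn.cluster import KMeans

# 96 fifteen-minute intervals per day; each interval yields an "optimal"
# signal plan, here a point in a 2-dimensional transport phase space
# (green durations of two phases, in seconds) for a hypothetical intersection.
rng = np.random.default_rng(0)
hours = np.arange(96) * 0.25
demand = (20 + 15 * np.exp(-((hours - 8.5) / 1.5) ** 2)
             + 18 * np.exp(-((hours - 18.0) / 2.0) ** 2))   # two rush hours
plans = np.column_stack([demand + rng.normal(0, 1, 96),
                         0.6 * demand + rng.normal(0, 1, 96)])

# Collapse 96 plans into 10 stored control programs:
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(plans)
deviation = np.abs(plans - km.cluster_centers_[km.labels_]).max()
print(f"max phase-duration deviation from cluster centre: {deviation:.1f} s")
# km.labels_ assigns each 15-minute interval one of the 10 control
# programs, i.e. it is the daily switching schedule.
```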
-
Cosmological models of the Universe without a Beginning and without a singularity
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 473-486
A new type of cosmological model is considered, for a Universe that has no Beginning and evolves from the infinitely distant past.
These models are an alternative to the cosmological models based on the Big Bang theory, according to which the Universe has a finite age and was formed from an initial singularity.
In our opinion, the Big Bang theory has certain problems that our cosmological models do not have.
In our cosmological models, the Universe evolves by compression from the infinitely distant past, tending to a finite minimum of the distances between objects, of the order of the Compton wavelength $\lambda_C$ of hadrons, and to the maximum density of matter, corresponding to the hadron era of the Universe. It then expands, progressing through all the stages of evolution established by astronomical observations, up to the era of inflation.
The material basis that determines the evolution of the Universe in our cosmological models is a nonlinear Dirac spinor field $\psi(x^k)$ with a nonlinearity of the type $\beta(\bar{\psi}\psi)^n$ in the field Lagrangian ($\beta = \mathrm{const}$, $n$ is a rational number), where $\psi(x^k)$ is the 4-component Dirac spinor and $\bar{\psi}$ is the conjugate spinor.
In addition to the spinor field $\psi$, our cosmological models contain other components of matter in the form of a perfect fluid with the equation of state $p = w\varepsilon$ ($w = \mathrm{const}$) for various values of the coefficient $w$ ($-1 < w < 1$). The additional components affect the evolution of the Universe, and all stages of evolution occur in accordance with the established observational data. Here $p$ is the pressure, $\varepsilon = \rho c^2$ is the energy density, $\rho$ is the mass density, and $c$ is the speed of light in vacuum.
We have shown that the cosmological models with a nonlinear spinor field with nonlinearity exponent $n = 2$ are the closest to reality.
In this case, the nonlinear spinor field is described by the Dirac equation with cubic nonlinearity.
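For reference, a sketch of how the cubic term arises, assuming the interaction enters the Lagrangian in the stated form (signs and conventions may differ from the paper's): varying

$$L = \frac{i}{2}\left(\bar{\psi}\gamma^k\partial_k\psi - \partial_k\bar{\psi}\,\gamma^k\psi\right) - m\bar{\psi}\psi + \beta(\bar{\psi}\psi)^n$$

with respect to $\bar{\psi}$ yields the field equation

$$i\gamma^k\partial_k\psi - m\psi + n\beta(\bar{\psi}\psi)^{n-1}\psi = 0,$$

which for $n = 2$ contains the cubic term $2\beta(\bar{\psi}\psi)\psi$.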
But this is exactly the Ivanenko–Heisenberg nonlinear spinor equation, which W. Heisenberg used to construct a unified spinor theory of matter.
It is an amazing coincidence that the same nonlinear spinor equation can be the basis for constructing a theory of two different fundamental objects of nature — the evolving Universe and physical matter.
The development of the cosmological models is supplemented by computer studies, the results of which are presented graphically in the paper.