Search results for 'data':
Articles found: 358
  1. Basalaev A.V., Kloss Y.Y., Lubimov D.U., Knyazev A.N., Shuvalov P.V., Sherbakov D.V., Nahapetyan A.V.
    A problem-modeling environment for the numerical solution of the Boltzmann equation on a cluster architecture for analyzing gas-kinetic processes in the interelectrode gap of thermal emission converters
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 219-232

    This paper applies a numerical method for solving the Boltzmann equation to the problem of modeling the behavior of radionuclides in the cavity of the interelectrode gap of a multielement electrogenerating channel. The analysis of gas-kinetic processes in thermionic converters is important for validating the design of the power-generating channel. The paper examines two design schemes of the channel: with one-way and with two-way withdrawal of gaseous fission products into the vacuum-cesium system. The analysis uses a second-order-accurate two-dimensional transport scheme for the left-hand side of the equation and the projection method for the right-hand side, the collision integral. In the course of the work, a software package was implemented that performs calculations on a cluster architecture by parallelizing the left-hand side of the equation; the paper analyzes how computational efficiency depends on the number of parallel nodes. The paper reports calculated distributions of the pressures of gaseous fission products in the gap cavity for various sets of initial pressures and flows; the dependence of the radionuclide pressure in the collector region on the cesium pressures at the ends of the gap was determined. Tests in the loop channel of a nuclear reactor confirm the obtained results.

  2. Antipova S.A., Vorobiev A.A.
    The purposeful transformation of mathematical models based on strategic reflection
    Computer Research and Modeling, 2019, v. 11, no. 5, pp. 815-831

    The study of complex processes in various spheres of human activity traditionally relies on mathematical models. In modern conditions, the development and application of such models is greatly simplified by high-speed computing hardware and by specialized tools that allow models to be assembled, in effect, from pre-prepared modules. Despite this, well-known problems are becoming increasingly important: ensuring the adequacy of the model and the reliability of the source data, putting simulation results into practice, the excessively large dimension of the source data, and the joint application of mathematical models that differ substantially in complexity and in the degree of integration of the simulated processes. Even more critical may be external constraints imposed on the value of the optimized functional, which are often unattainable within the framework of the constructed model. It is logical to assume that fulfilling these constraints requires a purposeful transformation of the original model, that is, a transition to a mathematical model with a deliberately improved solution. The new model will obviously have a different internal structure (a set of parameters and their interrelations), as well as different formats (domains of definition) of the source data. The possibilities of purposeful change of the initial model investigated by the authors are based on the idea of strategic reflection. The most mathematically difficult practical implementation of the authors' idea involves simulation models, for which algorithms for finding optimal solutions have known limitations and sensitivity analysis is in most cases very difficult.
    Using a fairly standard discrete-event simulation model as an example, the article presents typical methodological techniques that allow ranking variable parameters by sensitivity and subsequently expanding the domain of definition of the variable parameter to which the simulation model is most sensitive. In the transition to the “improved” model, it is also possible to simultaneously exclude parameters whose influence on the optimized functional is insignificant and, conversely, to introduce new parameters corresponding to real processes into the model.
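    The sensitivity-ranking step described here can be sketched as follows (a minimal illustration using a hypothetical one-factor-at-a-time perturbation scheme and a toy stand-in for the simulation model; the authors' actual techniques are not reproduced):

```python
def sensitivity_ranking(model, params, delta=0.05):
    """Rank parameters by the normalized response of the model output
    to a small relative perturbation of each parameter."""
    base = model(params)
    scores = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1.0 + delta)
        # normalized sensitivity: relative output change / relative input change
        scores[name] = abs((model(perturbed) - base) / base) / delta
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy stand-in for a simulation run: the output depends strongly on "rate"
# and only weakly on "offset" (both names are invented for illustration).
def toy_model(p):
    return p["rate"] ** 2 + 0.01 * p["offset"]

ranking = sensitivity_ranking(toy_model, {"rate": 2.0, "offset": 3.0})
print(ranking[0][0])  # prints "rate", the most sensitive parameter
```

    In this sketch the most sensitive parameter heads the ranking and would be the first candidate for an expanded domain of definition.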

  3. Mitin A.L., Kalashnikov S.V., Yankovskiy E.A., Aksenov A.A., Zhluktov S.V., Chernyshev S.A.
    Methodical questions of numerical simulation of external flows on locally-adaptive grids using wall functions
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1269-1290

    The work investigates the possibility of increasing the efficiency of solving external aerodynamic problems. Methodical questions of using locally-adaptive grids and wall functions for numerical simulation of turbulent flows past flying vehicles are studied. Reynolds-averaged Navier–Stokes equations are integrated and closed by the standard $k–\varepsilon$ turbulence model. Subsonic turbulent flow of a perfect compressible viscous gas past the RAE 2822 airfoil is considered. Calculations are performed in the CFD software FlowVision. The efficiency of the technology of smoothing diffusion fluxes and of the Bradshaw formula for turbulent viscosity is analyzed; these techniques are regarded as means of increasing the accuracy of solving aerodynamic problems on locally-adaptive grids. The obtained results show that smoothing diffusion fluxes essentially decreases the discrepancy between computed and experimental values of the drag coefficient. In addition, the distribution of the skin friction coefficient over the curvilinear surface of the airfoil becomes more regular. These results indicate that the given technology is an effective way to increase the accuracy of calculations on locally-adaptive grids. The Bradshaw formula for the dynamic coefficient of turbulent viscosity is traditionally used in the SST $k–\omega$ turbulence model; the possibility of implementing it in the standard $k–\varepsilon$ turbulence model is investigated in the present article. The calculations show that this formula provides good agreement of the integral aerodynamic characteristics and of the distribution of the pressure coefficient over the airfoil surface with experimental data. Besides that, it substantially improves the accuracy of simulating the flow in the boundary layer and in the wake.
On the other hand, using the Bradshaw formula in the simulation of the air flow past airfoil RAE 2822 leads to under-prediction of the skin friction coefficient. For this reason, the conclusion is made that practical use of the Bradshaw formula requires its preliminary validation and calibration on reliable experimental data available for the considered flows. The results of the work as a whole show that using the technologies discussed in numerical solution of external aerodynamic problems on locally-adaptive grids together with wall functions provides the computational accuracy acceptable for quick assessment of the aerodynamic characteristics of a flying vehicle. So, one can deduce that the FlowVision software is an effective tool for preliminary design studies, for conceptual design, and for aerodynamic shape optimization.

  4. Vlasov A.A., Pilgeikina I.A., Skorikova I.A.
    Method of forming multiprogram control of an isolated intersection
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 295-303

    The simplest and most desirable method of traffic signal control is precalculated regulation, when the parameters of the traffic light object's operation are calculated in advance and activated according to a schedule. This work proposes a method of forming a signal plan that allows one to calculate the control programs and set the period of their activity. Preparation of initial data for the calculation includes forming a time series of daily traffic intensity with an interval of 15 minutes. When carrying out field studies, part of the traffic intensity measurements may be missing; to fill in the missing measurements, spline interpolation is used. The next step of the method is to calculate the daily set of signal plans. The work presents the relations that allow one to calculate the optimal durations of the control cycle and of the permissive movement phases and to set the period of their activity. Existing movement control systems limit the number of control programs. To reduce the number of signal plans and to determine their activity periods, clustering by the $k$-means method in the space of transport phases is introduced. In the new daily signal plan, the durations of the phases are determined by the coordinates of the obtained cluster centers, and the activity periods are set by the elements included in each cluster. Testing on a numerical example showed that, with 10 clusters, the deviation of the optimal phase duration from the cluster centers does not exceed 2 seconds. To evaluate the effectiveness of the developed method, a real signalized intersection was considered as an example. Based on field studies of traffic patterns and traffic demand, a microscopic model for the SUMO (Simulation of Urban Mobility) program was developed. The efficiency assessment is based on transport losses estimated by travel time.
    Simulation of multiprogram traffic light control showed a 20% reduction in delay time at the traffic light object compared with single-program control. The proposed method allows automating the calculation of daily signal plans and the setting of their activity times.
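    The clustering step can be illustrated with plain Lloyd's $k$-means over vectors of phase durations (a sketch with invented two-phase timings, not the intersection data from the paper):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; each point is a tuple of phase durations (s)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[nearest].append(p)
        # recompute each center as the mean of its cluster (keep it if empty)
        centers = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Invented 15-minute-interval plans: (phase 1 s, phase 2 s) for quiet and peak periods.
plans = [(20, 25), (21, 24), (19, 26), (40, 55), (41, 54), (42, 56)]
programs = sorted(kmeans(plans, k=2))  # two control programs instead of six plans
```

    Each cluster center becomes one control program, and the intervals assigned to that cluster define its activity period.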

  5. Krechet V.G., Oshurko V.B., Kisser A.E.
    Cosmological models of the Universe without a Beginning and without a singularity
    Computer Research and Modeling, 2021, v. 13, no. 3, pp. 473-486

    A new type of cosmological models for the Universe that has no Beginning and evolves from the infinitely distant past is considered.

    These models are alternative to the cosmological models based on the Big Bang theory according to which the Universe has a finite age and was formed from an initial singularity.

    In our opinion, there are certain problems in the Big Bang theory that our cosmological models do not have.

    In our cosmological models, the Universe evolves by compression from the infinitely distant past, tending to a finite minimum of distances between objects on the order of the Compton wavelength $\lambda_C$ of hadrons and to the maximum density of matter corresponding to the hadron era of the Universe. Then it expands, progressing through all the stages of evolution established by astronomical observations, up to the era of inflation.

    The material basis that determines the nature of the evolution of the Universe in our cosmological models is a nonlinear Dirac spinor field $\psi(x^k)$ with a nonlinearity of the type $\beta(\bar{\psi}\psi)^n$ in the Lagrangian of the field ($\beta = \mathrm{const}$, $n$ is a rational number), where $\psi(x^k)$ is the 4-component Dirac spinor and $\bar{\psi}$ is the conjugate spinor.

    In addition to the spinor field $\psi$, our cosmological models contain other components of matter in the form of an ideal fluid with the equation of state $p = w\varepsilon$ ($w = \mathrm{const}$) for different values of the coefficient $w$ ($-1 < w < 1$). The additional components affect the evolution of the Universe, and all stages of evolution proceed in accordance with established observational data. Here $p$ is the pressure, $\varepsilon = \rho c^2$ is the energy density, $\rho$ is the mass density, and $c$ is the speed of light in vacuum.

    We have shown that cosmological models with a nonlinear spinor field with a nonlinearity coefficient $n = 2$ are the closest to reality.

    In this case, the nonlinear spinor field is described by the Dirac equation with cubic nonlinearity.
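    The cubic equation referred to here can be sketched from the stated nonlinearity (a sketch only: varying the interaction term $\beta(\bar{\psi}\psi)^n$ with respect to $\bar{\psi}$ adds $n\beta(\bar{\psi}\psi)^{n-1}\psi$ to the Dirac equation; the paper's covariant-derivative and metric conventions, and the presence of the mass term, are not reproduced):

```latex
% Nonlinear Dirac equation with the cubic term arising for n = 2
% (the mass term m may be absent in the massless Ivanenko–Heisenberg variant):
i\gamma^k \nabla_k \psi - m\psi + 2\beta\,(\bar{\psi}\psi)\,\psi = 0
```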

    But this is the Ivanenko–Heisenberg nonlinear spinor equation, which W. Heisenberg used to construct a unified spinor theory of matter.

    It is an amazing coincidence that the same nonlinear spinor equation can be the basis for constructing a theory of two different fundamental objects of nature — the evolving Universe and physical matter.

    The development of the cosmological models is supplemented by computer studies, the results of which are presented graphically in the paper.

  6. Biliatdinov K.Z., Dosikov V.S., Meniailo V.V.
    Improvement of the paired comparison method for implementation in computer programs used in assessment of technical systems’ quality
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1125-1135

    The article describes an improved paired comparison method, which systematizes in tables the rules of logical inference and the formulas of check indices for comparing technical systems. To achieve this goal, the authors formulate rational rules of logical inference for making paired comparisons of systems. In addition, to check the consistency of the assessment results, the authors introduce such parameters as "the number of points gained by one system" and "the systems' quality index" and derive the corresponding calculation formulas. For the practical application of this method in computer programs, the authors propose using formalized variants of interconnected tables: a table for processing and systematizing expert information, a table of possible logical conclusions based on the results of comparing a given number of technical systems, and a table of check values for the paired comparison method used in quality assessment of a given number of technical systems. These tables make it possible to organize information-processing procedures more rationally and to largely exclude the influence of mistakes on the results of quality assessment of technical systems at the data-input stage. The main positive effect of implementing the paired comparison method is a considerable reduction of the time and resources needed to organize the experts' work, process expert information, and prepare and conduct remote interviews with experts (on the Internet or on a local computer network of an organization). This effect is achieved by rational use of the input data on the quality of the systems to be assessed. The proposed method is applicable in computer programs used to assess the effectiveness and stability of large technical systems.
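    The scoring-and-check idea can be sketched in a few lines (a hypothetical tally; the paper's specific tables, indices, and formulas are not reproduced):

```python
def scores_with_check(pref):
    """pref[i][j] = 1 if system i is judged better than system j,
    0.5 for a tie, 0 otherwise (pref[i][j] + pref[j][i] must equal 1).
    Returns per-system scores and a simple consistency check: with n systems,
    the scores must sum to the number of pairs, n * (n - 1) / 2."""
    n = len(pref)
    scores = [sum(pref[i][j] for j in range(n) if j != i) for i in range(n)]
    return scores, sum(scores) == n * (n - 1) / 2

# Three systems: A beats B and C; B and C tie.
scores, ok = scores_with_check([[0, 1, 1],
                                [0, 0, 0.5],
                                [0, 0.5, 0]])
```

    A total that fails the check signals an input error (for example, a pair entered twice or not at all) before any quality conclusions are drawn.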

  7. Ahmed M., Hegazy M., Klimchik A.S., Boby R.A.
    Lidar and camera data fusion in self-driving cars
    Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1239-1253

    Sensor fusion is one of the important solutions to the perception problem in self-driving cars, where the main aim is to enhance the perception of the system without losing real-time performance. It is therefore a trade-off problem, and it is often observed that most models with high environment perception cannot run in real time. Our article is concerned with camera and Lidar data fusion for better environment perception in self-driving cars, considering three main classes: cars, cyclists, and pedestrians. We fuse the output of a 3D detector that takes its input from the Lidar with the output of a 2D detector that takes its input from the camera, to obtain better perception output than either of them separately, while ensuring real-time operation. We addressed the problem using a 3D detector (Complex-YOLOv3) and a 2D detector (YOLOv3), applying an image-based fusion method that combines Lidar and camera information with a fast and efficient late-fusion technique discussed in detail in this article. We used the mean average precision (mAP) metric to evaluate our object detection model and to compare the proposed approach with the individual detectors. Finally, we present results on the KITTI dataset as well as on our real hardware setup, which consists of a Velodyne 16 Lidar and Leopard USB cameras. We used Python to develop our algorithm and validated it on the KITTI dataset. We used ROS 2 together with C++ to verify the algorithm on the dataset obtained from our hardware configuration, which showed that the proposed approach gives good results and works efficiently in practical situations in real time.
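    The late-fusion step can be illustrated with a minimal IoU-based matcher over 2D boxes (a sketch under the assumption that the Lidar detections have already been projected into the image plane; this is not the exact Complex-YOLOv3 pipeline from the paper):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def late_fusion(cam_dets, lidar_dets, iou_thr=0.5):
    """Each detection is (box, score, label). Camera and Lidar detections of the
    same class that overlap enough are merged, keeping the higher score;
    unmatched detections from either sensor pass through."""
    used, fused = set(), []
    for cbox, cscore, clabel in cam_dets:
        best, best_iou = None, iou_thr
        for i, (lbox, lscore, llabel) in enumerate(lidar_dets):
            if i in used or llabel != clabel:
                continue
            v = iou(cbox, lbox)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            used.add(best)
            fused.append((cbox, max(cscore, lidar_dets[best][1]), clabel))
        else:
            fused.append((cbox, cscore, clabel))
    fused.extend(d for i, d in enumerate(lidar_dets) if i not in used)
    return fused
```

    Keeping unmatched detections from both sensors is what lets the fused output cover cases where one sensor misses an object entirely.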

  8. Chukanov S.N.
    Comparison of complex dynamical systems based on topological data analysis
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 513-525

    The paper considers the possibility of comparing and classifying dynamical systems on the basis of topological data analysis. Determining the measures of interaction between the channels of dynamical systems with the HIIA (Hankel Interaction Index Array) and PM (Participation Matrix) methods makes it possible to build HIIA and PM graphs and their adjacency matrices. For any linear dynamical system, an approximating directed graph can be constructed whose vertices correspond to the components of the state vector of the system and whose arcs correspond to the measures of mutual influence of those components. Constructing a measure of distance (proximity) between the graphs of different dynamical systems is important, for example, for detecting normal operation or failures of a dynamical system or a control system. To compare and classify dynamical systems, weighted directed graphs corresponding to them are first formed, with edge weights equal to the measures of interaction between the channels of the system. Based on the HIIA and PM methods, matrices of the measures of interaction between the channels of dynamical systems are determined. The paper gives examples of forming weighted directed graphs for various dynamical systems and of estimating the distance between these systems by topological data analysis. An example is given of forming a weighted directed graph for the dynamical system corresponding to the control of the components of the angular velocity vector of an aircraft, which is considered as a rigid body with principal moments of inertia. The method of topological data analysis used in this work to estimate the distance between the structures of dynamical systems is based on forming persistent barcodes and persistent landscape functions. Methods for comparing dynamical systems based on topological data analysis can be used in the classification of dynamical and control systems.
    The use of traditional algebraic topology for the analysis of objects does not yield enough information because of the reduction of the data dimension (and the resulting loss of geometric information). Methods of topological data analysis provide a balance between reducing the data dimension and characterizing the internal structure of an object. In this paper, topological data analysis methods based on Vietoris–Rips and Dowker filtrations are used to assign a geometric scale to each topological feature. Persistent landscape functions are used to map the persistence diagrams into a Hilbert space and then to quantify the comparison of dynamical systems. On the basis of persistent landscape functions, we propose comparing the graphs of dynamical systems and finding the distances between them; for this purpose, weighted directed graphs corresponding to the dynamical systems are formed first. Examples of finding the distance between objects (dynamical systems) are given.
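    The barcode idea can be illustrated for the simplest case, the 0-dimensional persistence of a Vietoris–Rips filtration of a point cloud, computed with a union-find over edges sorted by length (a toy sketch; the paper's Dowker filtrations, higher-dimensional features, and landscape functions are beyond its scope):

```python
from itertools import combinations

def h0_barcode(points):
    """0-dimensional persistence of a Vietoris–Rips filtration:
    every merge of two connected components kills a bar born at scale 0."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    deaths = []
    edges = sorted((dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(len(points)), 2))
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)  # one component dies at this scale
    # bars are (birth, death); one essential bar (0, inf) for the final component
    return [(0.0, d) for d in deaths] + [(0.0, float("inf"))]
```

    Long bars in such a barcode correspond to well-separated clusters; for comparing systems, the paper maps the full diagrams to landscape functions and measures distances there.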

  9. Aksenov A.A., Alexandrova N.A., Budnikov A.V., Zhestkov M.N., Sazonova M.L., Kochetkov M.A.
    Simulation of turbulent mixing of multi-temperature flows in a T-junction by the LES approach in the FlowVision software package
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 827-843

    The paper presents the results of numerical simulation of the turbulent mixing of water flows of different temperatures in a T-junction using the FlowVision software package. The article describes in detail an experimental rig specially designed to provide boundary conditions that are simple for most computational fluid dynamics software systems. Values of time-averaged temperatures and velocities at the control sensors and planes were obtained from the test results. The article presents the system of partial differential equations used in the calculation, describing heat and mass transfer in a liquid with the Smagorinsky turbulence model. Boundary conditions are specified that allow setting random velocity pulsations at the entrance to the computational domain. Distributions of time-averaged water velocity and temperature in the control sections and sensors are obtained. The simulation is performed on various computational grids whose global coordinate axes coincide with the directions of the hot and cold water flows. The ability of FlowVision to construct the computational grid during the simulation, based on changes in the flow parameters, is shown, and the influence of such a grid-construction algorithm on the calculation results is estimated. Results of calculations on a diagonal grid using a beveled scheme are given (the coordinate lines do not coincide with the directions of the tee pipes). The high efficiency of the beveled scheme is shown when modeling flows whose general direction does not coincide with the faces of the computational cells. A comparison of simulation results on various computational grids is carried out. The numerical results obtained in FlowVision are compared with experimental data and with calculations performed using other computing programs.
    The results of modeling the turbulent mixing of water flows of different temperatures in FlowVision are closer to the experimental data than the calculations in ANSYS CFX. It is shown that applying the LES turbulence model on relatively small computational grids in FlowVision yields results with an error within 5%.

  10. Yakovleva T.V.
    Statistical distribution of the quasi-harmonic signal’s phase: basics of theory and computer simulation
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 287-297

    The paper presents the results of fundamental research aimed at the theoretical study and computer simulation of the statistical distribution of the phase of a quasi-harmonic signal. A quasi-harmonic signal is formed as a result of Gaussian noise acting on an initially harmonic signal. By means of mathematical analysis, explicit formulas have been obtained for the principal characteristics of this distribution: the cumulative distribution function, the probability density function, and the likelihood function. Computer simulation was used to analyze the dependence of these functions on the parameters of the phase distribution. The paper elaborates methods for estimating the parameters of the phase distribution, which contain information about the initial, undistorted signal. It is substantiated that the task of estimating the initial value of the phase of a quasi-harmonic signal can be solved efficiently by averaging the results of sampled measurements. For estimating the second parameter of the phase distribution, namely the parameter determining the signal level relative to the noise level, a maximum likelihood technique is proposed. Graphical illustrations obtained by computer simulation of the principal characteristics of the phase distribution are presented. The existence and uniqueness of the maximum of the likelihood function substantiate the possibility and efficiency of estimating the signal level relative to the noise level by the maximum likelihood technique. The elaborated method of estimating the noise-free signal level relative to the noise, i.e. the parameter characterizing the signal's intensity, on the basis of measurements of the signal's phase is an original and principally new technique, which opens prospects for using phase measurements as a tool of stochastic data analysis. The presented investigation is meaningful for determining the phase and the signal level by statistical processing of sampled phase measurements. The proposed methods of estimating the parameters of the phase distribution can be used in various scientific and technological tasks, in particular in such areas as radiophysics, optics, radar, radio navigation, and metrology.
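    The phase-averaging estimator mentioned for the first parameter can be sketched as a circular mean (an illustration with Gaussian phase scatter as a stand-in for the actual quasi-harmonic phase distribution; the maximum-likelihood estimator for the signal-to-noise parameter is not reproduced here):

```python
import math
import random

def circular_mean(phases):
    """Average sampled phase measurements on the circle: plain arithmetic
    averaging fails near the +/- pi wrap-around, so average unit vectors."""
    s = sum(math.sin(p) for p in phases)
    c = sum(math.cos(p) for p in phases)
    return math.atan2(s, c)

# Illustrative check: noisy phase measurements scattered around 1.0 rad.
rng = random.Random(1)
true_phase = 1.0
samples = [true_phase + rng.gauss(0.0, 0.3) for _ in range(10000)]
estimate = circular_mean(samples)
```

    Averaging on the unit circle rather than on the raw angles is what keeps the estimator well behaved when the true phase lies near the edge of the principal interval.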


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

