Search results for 'data':
Articles found: 358
  1. Golomazov M.M.
    Simulation of asteroid braking in the Earth atmosphere
    Computer Research and Modeling, 2013, v. 5, no. 6, pp. 917-926

    This article investigates the phenomenon of asteroid braking in the vicinity of Chelyabinsk. The trajectory and the basic parameters of the asteroid are simulated on the basis of the few available video recordings and measurements. The hypersonic flow around the asteroid is calculated before and after its breakup. A possible scenario of synchronous braking of the asteroid fragments is discussed. Trajectory data and gas-dynamic functions are presented as data for characterizing the asteroid breakup.

    Views (last year): 4. Citations: 2 (RSCI).
  2. Pechenyuk A.V.
    Benchmarking of CAE FlowVision in ship flow simulation
    Computer Research and Modeling, 2014, v. 6, no. 6, pp. 889-899

    In the field of naval architecture, the most authoritative recommendations on verification and validation of numerical methods have been developed within an international workshop on the numerical prediction of ship viscous flow, which is held every five years in Gothenburg (Sweden) and Tokyo (Japan) alternately. At the workshop “Gothenburg–2000”, three modern hull forms with reliable experimental data were introduced as test cases. The most general case among them is the KCS containership, a ship of moderate specific speed and fullness. The paper focuses on a numerical study of the flow around the KCS hull, carried out according to the formal procedures of the workshop with the help of the CAE system FlowVision. The findings were compared with experimental data and with the computational results of other leading CAE systems.

    Views (last year): 1. Citations: 5 (RSCI).
  3. The paper provides a solution of the task of calculating the parameters of a Rician-distributed signal on the basis of the maximum likelihood principle in the limiting cases of large and small values of the signal-to-noise ratio. Analytical formulas are obtained for the solution of the system of maximum likelihood equations for the required signal and noise parameters, both for the one-parameter approximation, when only one parameter is calculated on the assumption that the second one is known a priori, and for the two-parameter task, when both parameters are a priori unknown. The direct calculation of the required signal and noise parameters by these formulas removes the need for a time-consuming numerical solution of the system of nonlinear equations and thus optimizes the duration of computer processing of signals and images. The results of a computer simulation confirming the theoretical conclusions are presented. The task is meaningful for the purposes of Rician data processing, in particular in magnetic resonance imaging.

    Views (last year): 2.
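
    As an illustration of the limiting-case idea described in the preceding abstract, the sketch below shows the approximations the problem reduces to at the extremes of the signal-to-noise ratio: at high SNR the Rice density is close to a Gaussian, so the maximum likelihood estimates reduce to the sample mean and standard deviation, while at low SNR it degenerates to a Rayleigh density with the closed-form noise estimate $\sigma^2 = \mathrm{mean}(x^2)/2$. This is a generic textbook approximation in Python, not the paper's exact formulas.

      import numpy as np

      def rician_params_high_snr(samples):
          """High-SNR limit: the Rice density is close to a Gaussian, so the
          ML estimates reduce to the sample mean and standard deviation."""
          x = np.asarray(samples, dtype=float)
          return x.mean(), x.std()          # (signal amplitude, noise level)

      def rician_noise_low_snr(samples):
          """Low-SNR limit: the Rice density degenerates to a Rayleigh density,
          whose ML noise estimate is sigma^2 = mean(x^2) / 2."""
          x = np.asarray(samples, dtype=float)
          return np.sqrt(np.mean(x**2) / 2.0)
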
  4. The paper provides a solution of the two-parameter task of joint signal and noise estimation in data analysis under the conditions of the Rice distribution by the techniques of mathematical statistics: the maximum likelihood method and variants of the method of moments. The considered variants of the method of moments include the joint signal and noise estimation on the basis of measuring the 2-nd and the 4-th moments (MM24) and on the basis of measuring the 1-st and the 2-nd moments (MM12). For each of the elaborated methods, explicit systems of equations have been obtained for the required parameters of the signal and noise. An important mathematical result of the investigation is that the solution of the system of two nonlinear equations with two variables — the sought-for signal and noise parameters — has been reduced to the solution of just one equation with one unknown quantity, which is important from the viewpoint of both the theoretical investigation of the proposed technique and its practical application, as it essentially decreases the computational resources required for the technique's implementation. The implemented theoretical analysis has resulted in an important practical conclusion: solving the two-parameter task does not increase the required numerical resources compared with the one-parameter approximation. The task is meaningful for the purposes of Rician data processing, in particular image processing in magnetic resonance imaging systems. The theoretical conclusions have been confirmed by the results of a numerical experiment.

    Views (last year): 2. Citations: 2 (RSCI).
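
    The MM24 variant mentioned in the preceding abstract admits a well-known closed form that may serve as a reference point: for the Rice distribution $E[x^2] = \nu^2 + 2\sigma^2$ and $E[x^4] = \nu^4 + 8\nu^2\sigma^2 + 8\sigma^4$, which yields $\nu^2 = \sqrt{2m_2^2 - m_4}$ and $\sigma^2 = (m_2 - \nu^2)/2$ from the sample moments $m_2$ and $m_4$. The sketch below implements this textbook estimator; the paper derives its own variants, which are not reproduced here.

      import numpy as np

      def mm24_estimate(samples):
          """Moment-based (MM24) estimate of the Rician signal and noise
          parameters from the 2nd and 4th sample moments."""
          x = np.asarray(samples, dtype=float)
          m2, m4 = np.mean(x**2), np.mean(x**4)
          nu_sq = np.sqrt(max(2.0 * m2**2 - m4, 0.0))    # guard against sampling noise
          sigma_sq = max((m2 - nu_sq) / 2.0, 0.0)
          return np.sqrt(nu_sq), np.sqrt(sigma_sq)       # (signal, noise)
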
  5. Ivanov S.D.
    Web-based interactive registry of the geosensors
    Computer Research and Modeling, 2016, v. 8, no. 4, pp. 621-632

    Selecting and correctly applying a geosensor — an instrument of mineral geothermobarometry — is challenging because of the wide variety of existing geosensors on the one hand and the specific requirements for their use on the other. In this paper, organizing geosensors within a computer system called an interactive registry is proposed in order to reduce the labor intensity of geosensor usage and to provide information support for them. The article gives a formal description of a thermodynamic geosensor as a function of the mineral compositions and independent parameters, as well as the basic steps of pressure and temperature estimation that are common to all geosensors: conversion to formula units, calculation of additional parameters, and calculation of the required values. Existing collections of geosensors, implemented as standalone applications or as spreadsheets, are examined for the advantages and disadvantages of these approaches. Additional information necessary to use a geosensor is described: paragenesis, accuracy and range of parameter values, references, and others. Implementation of the geosensor registry as a web-based application that uses wiki technology is proposed. Wiki technology makes it possible to effectively organize the loosely formalized additional information about a geosensor and its algorithm, written in a programming language, into a single information system. Links, namespaces, and wiki markup are used to organize the information. The article discusses the implementation of the application on top of the DokuWiki system with a specially designed RESTful server, allowing users to apply the geosensors from the registry to their own data. The R programming language is used as the geosensor description language, and the Rserve server is used for calculations. A unit test for each geosensor allows checking the correctness of its implementation. The user interface of the application is developed as a DokuWiki plug-in. An example of usage is given. In conclusion, the questions of application security, performance, and scaling are discussed.

    Views (last year): 5.
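
    The registry described above stores geosensors as R functions executed through Rserve; purely for orientation, the Python sketch below mirrors the three common steps named in the abstract (conversion to formula units, calculation of additional parameters, calculation of the required values). All names and signatures here are hypothetical and illustrative only.

      from dataclasses import dataclass
      from typing import Callable, Dict

      Composition = Dict[str, float]   # oxide wt.% of one mineral, e.g. {"SiO2": 51.2, ...}

      @dataclass
      class Geosensor:
          name: str
          to_formula_units: Callable[[Composition], Dict[str, float]]
          extra_parameters: Callable[[Dict[str, float]], Dict[str, float]]
          estimate: Callable[[Dict[str, float], float], float]   # e.g. T at a given P

          def apply(self, composition: Composition, pressure: float) -> float:
              units = self.to_formula_units(composition)           # step 1
              params = self.extra_parameters(units)                # step 2
              return self.estimate({**units, **params}, pressure)  # step 3
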
  6. Fisher J.V., Schelyaev A.E.
    Verification of calculated characteristics of supersonic turbulent jets
    Computer Research and Modeling, 2017, v. 9, no. 1, pp. 21-35

    Verification results for the computed characteristics of supersonic turbulent jets are presented. Numerical simulation of axisymmetric nozzle operation is carried out using the FlowVision CFD software. Open CFD test cases are used. The test cases include the Seiner experiments at an exit Mach number of 2.0, both fully expanded and under-expanded $(P/P_0 = 1.47)$. The fully expanded nozzle is investigated over a wide range of flow temperatures (300…3000 K). The considered studies include simulation of the flow downstream of the nozzle exit. A further numerical investigation is presented at an exit Mach number of 2.02 and a free-stream Mach number of 2.2. The geometric model of the convergent-divergent nozzle is rebuilt from the original Putnam experiment. This study is set up with a nozzle pressure ratio of 8.12 and a total temperature of 317 K.

    The paper provides a comparison of the obtained FlowVision results with experimental data and with other current CFD studies. The comparison of the calculated characteristics and the experimental data indicates good agreement. The best agreement with Seiner's experimental velocity distribution (within about 7 % in the far field for the first case) is obtained using the two-equation standard $k–\varepsilon$ turbulence model with the Wilcox compressibility correction. The predicted Mach number distribution at $Y/D = 1$ for the Putnam nozzle shows an accuracy of 3 %.

    General guidelines for the simulation of supersonic turbulent jets in the FlowVision software are formulated in the given paper. A grid convergence study determined the optimal cell count. In order to calculate the design regime, it is recommended to build a grid containing not less than 40 cells from the axis of symmetry to the nozzle wall. In order to calculate an off-design regime, it is necessary to resolve the shock waves; for this purpose, not less than 80 cells are required in the radial direction. Investigation of the influence of the turbulence model on the flow characteristics has shown that the version of the SST $k–\omega$ turbulence model implemented in the FlowVision software essentially underpredicts the axial velocity. The standard $k–\varepsilon$ model without compressibility correction also underpredicts the axial velocity; these calculations agree well with calculations in other CFD codes using the standard $k–\varepsilon$ model. The in-house $k–\varepsilon$ turbulence model KEFV with compressibility correction slightly overpredicts the axial velocity. Since the best results are obtained using the standard $k–\varepsilon$ model combined with the Wilcox compressibility correction, this model is recommended for the problems discussed.

    The developed methodology can be regarded as a basis for numerical investigations of more complex nozzle flows.

    Views (last year): 43.
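
    The regimes quoted in the abstract above can be cross-checked with the standard isentropic relation $P_0/P = (1 + \frac{\gamma-1}{2}M^2)^{\gamma/(\gamma-1)}$: for an exit Mach number of 2.0 the design pressure ratio is about 7.8, and for Mach 2.02 it is about 8.1, close to the quoted nozzle pressure ratio of 8.12. The snippet below is only this textbook check, not part of the FlowVision setup.

      def total_to_static_pressure_ratio(mach, gamma=1.4):
          """Isentropic total-to-static pressure ratio P0/P at a given Mach number."""
          return (1.0 + 0.5 * (gamma - 1.0) * mach**2) ** (gamma / (gamma - 1.0))

      print(total_to_static_pressure_ratio(2.0))    # ~7.82, design NPR for exit Mach 2.0
      print(total_to_static_pressure_ratio(2.02))   # ~8.07, close to the quoted NPR of 8.12
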
  7. Sviridenko A.B.
    Direct multiplicative methods for sparse matrices. Newton methods
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 679-703

    We consider a numerically stable direct multiplicative algorithm for solving systems of linear equations that takes into account the sparseness of matrices presented in a packed form. The advantage of the algorithm is the ability to minimize the filling of the main rows of the multipliers without losing the accuracy of the results. Moreover, no changes are made to the position of the next processed row of the matrix, which allows using static data storage formats. Solving a linear system by the direct multiplicative algorithm is, like solving with an $LU$ decomposition, just another implementation scheme of the Gaussian elimination method.

    In this paper, this algorithm is the basis for solving the following problems:

    Problem 1. Setting the descent direction in Newtonian methods of unconstrained optimization by integrating one of the known techniques for constructing an essentially positive definite matrix. This approach allows us to weaken or remove the additional specific difficulties caused by the need to solve large systems of equations with sparse matrices presented in a packed form.

    Problem 2. Construction of a new mathematical formulation of the quadratic programming problem and a new form of specifying necessary and sufficient optimality conditions. They are quite simple and can be used to construct mathematical programming methods, for example, to find the minimum of a quadratic function on a polyhedral set of constraints based on solving systems of linear equations whose dimension is not higher than the number of variables of the objective function.

    Problem 3. Construction of a continuous analogue of the problem of minimizing a real quadratic polynomial in Boolean variables and a new form of defining the necessary and sufficient optimality conditions for the development of methods that solve it in polynomial time. As a result, the original problem is reduced to the problem of finding the minimum distance between the origin and an extreme point of a convex polyhedron, which is a perturbation of the $n$-dimensional cube and is described by a system of double linear inequalities with an upper triangular matrix of coefficients with units on the main diagonal. Only two faces are subject to investigation, one of which, or both, contains the vertices closest to the origin. To compute them, it is sufficient to solve $4n - 4$ systems of linear equations and to choose among them all the nearest equidistant vertices in polynomial time. The problem of minimizing a quadratic polynomial is $NP$-hard, since the $NP$-hard vertex cover problem for an arbitrary graph reduces to it. It follows therefrom that $P = NP$, a conclusion based on developments that go beyond the limits of integer optimization methods.

    Views (last year): 7. Citations: 1 (RSCI).
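
    Problem 1 above relies on one of the known techniques for constructing an essentially positive definite matrix to obtain the Newton descent direction. A common textbook device of this kind (not necessarily the author's choice) is to shift the Hessian by a multiple of the identity until a Cholesky factorization succeeds, as in the illustrative sketch below.

      import numpy as np

      def newton_direction(hessian, gradient, tau0=1e-3, beta=10.0, max_tries=50):
          """Descent direction for unconstrained Newton-type methods: if the
          Hessian is not positive definite, shift it by tau*I and retry the
          Cholesky factorization (a standard modified-Newton technique)."""
          H = np.asarray(hessian, dtype=float)
          g = np.asarray(gradient, dtype=float)
          tau = 0.0
          for _ in range(max_tries):
              try:
                  L = np.linalg.cholesky(H + tau * np.eye(H.shape[0]))
              except np.linalg.LinAlgError:
                  tau = tau0 if tau == 0.0 else beta * tau
                  continue
              y = np.linalg.solve(L, -g)        # forward substitution
              return np.linalg.solve(L.T, y)    # back substitution
          raise RuntimeError("could not obtain a positive definite modification")
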
  8. Verentsov S.I., Magerramov E.A., Vinogradov V.A., Gizatullin R.I., Alekseenko A.E., Kholodov Y.A.
    Bayesian localization for autonomous vehicle using sensor fusion and traffic signs
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 295-303

    The localization of a vehicle is an important task in the field of intelligent transportation systems. It is well known that sensor fusion helps to create more robust and accurate systems for autonomous vehicles. Standard approaches, like the extended Kalman filter or the particle filter, are inefficient in the case of highly non-linear data or have a high computational cost, which complicates their use in embedded systems. A significant increase of precision, especially when GPS (Global Positioning System) is unavailable, may be achieved by using landmarks with known locations — such as traffic signs, traffic lights, or SLAM (Simultaneous Localization and Mapping) features. However, this approach may be inapplicable if a priori locations are unknown or not accurate enough. We suggest a new approach for refining the coordinates of a vehicle by using landmarks, such as traffic signs. The core part of the suggested system is the Bayesian framework, which refines the vehicle location using external data about previous traffic sign detections, collected with crowdsourcing. This paper presents an approach that combines trajectories built using global coordinates from GPS and relative coordinates from an Inertial Measurement Unit (IMU) to produce a vehicle's trajectory in an unknown environment. In addition, we collected a new dataset, including data from smartphone GPS and IMU sensors and video feed from a windshield camera, recorded during 4 car rides on the same route. We also collected precise location data from a Real Time Kinematic Global Navigation Satellite System (RTK-GNSS) device, which can be used for validation. This RTK-GNSS system was used to collect precise data about the traffic sign locations on the route as well. The results show that the Bayesian approach helps with the trajectory correction and gives better estimations as the amount of prior information increases. The suggested method is efficient and requires, apart from the GPS/IMU measurements, only information about the vehicle locations during previous traffic sign detections.

    Views (last year): 22.
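
    The core of the system described above is a Bayesian correction of a dead-reckoned position using detections of traffic signs with known locations. The sketch below shows the simplest Gaussian form such a correction step can take; it is only an illustration of the principle, not the authors' model, which additionally exploits crowdsourced detection history.

      import numpy as np

      def bayes_position_update(prior_mean, prior_cov, meas_mean, meas_cov):
          """Fuse a Gaussian position prior (e.g. GPS + IMU dead reckoning) with a
          Gaussian position measurement implied by a detected landmark."""
          prior_mean = np.asarray(prior_mean, dtype=float)
          prior_cov = np.asarray(prior_cov, dtype=float)
          K = prior_cov @ np.linalg.inv(prior_cov + np.asarray(meas_cov, dtype=float))
          post_mean = prior_mean + K @ (np.asarray(meas_mean, dtype=float) - prior_mean)
          post_cov = (np.eye(len(prior_mean)) - K) @ prior_cov
          return post_mean, post_cov

      # Example: 2D position in metres, an uncertain prior corrected by a sign detection.
      mean, cov = bayes_position_update([10.0, 5.0], np.diag([4.0, 4.0]),
                                        [11.5, 4.2], np.diag([1.0, 1.0]))
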
  9. Sviridenko A.B.
    Direct multiplicative methods for sparse matrices. Quadratic programming
    Computer Research and Modeling, 2018, v. 10, no. 4, pp. 407-420

    A numerically stable direct multiplicative method for solving systems of linear equations that takes into account the sparseness of matrices presented in a packed form is considered. The advantages of the method are the calculation of the Cholesky factors for the positive definite matrix of the system of equations and its solution within a single procedure, as well as the possibility of minimizing the filling of the main rows of the multipliers without losing the accuracy of the results; moreover, no changes are made to the position of the next processed row of the matrix, which allows using static data storage formats. The solution of a system of linear equations by the direct multiplicative algorithm is, like the solution with an LU decomposition, just another scheme for implementing the Gaussian elimination method.

    The calculation of the Cholesky factors for the positive definite matrix of the system and its solution underlie the construction of a new mathematical formulation of the unconstrained quadratic programming problem and a new form of specifying the necessary and sufficient optimality conditions, which are quite simple and are used in this paper to construct a new mathematical formulation of the quadratic programming problem on a polyhedral set of constraints. The latter is the problem of finding the minimum distance between the origin and the boundary of the polyhedron defined by the set of constraints, treated by means of linear algebra and multidimensional geometry.

    To determine the distance, it is proposed to apply the known exact method based on solving systems of linear equations whose dimension is not higher than the number of variables of the objective function. The distances are determined by constructing perpendiculars to the faces of the polyhedron of different dimensions. To reduce the number of faces examined, the proposed method involves a special order of sorting the faces. Only the faces containing the vertex closest to the point of the unconstrained extremum and visible from this point are subject to investigation. If there are several nearest equidistant vertices, we investigate the face containing all these vertices and the faces of smaller dimension that have at least two nearest vertices in common with the first face.

    Views (last year): 32.
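
    For orientation, the abstract's starting point — computing the Cholesky factors of the positive definite system matrix and solving within one procedure — corresponds in the dense textbook case to the sketch below. The paper's direct multiplicative method additionally exploits sparsity and a packed storage format, which is not reproduced here.

      import numpy as np
      from scipy.linalg import cho_factor, cho_solve

      def solve_spd(A, b):
          """Solve A x = b for a symmetric positive definite A via its Cholesky factors."""
          factors = cho_factor(A)           # A = L L^T
          return cho_solve(factors, b)

      # The unconstrained quadratic program min 0.5*x^T A x - b^T x has the
      # stationarity condition A x = b, so its minimizer is:
      A = np.array([[4.0, 1.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0])
      x_star = solve_spd(A, b)
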
  10. Andruschenko V.A., Maksimov F.A., Syzranova N.G.
    Simulation of flight and destruction of the Benešov bolide
    Computer Research and Modeling, 2018, v. 10, no. 5, pp. 605-618

    Comets and asteroids are recognized by the scientists and the governments of all countries in the world to be one of the most significant threats to the development and even the existence of our civilization. Preventing this threat includes studying the motion of large meteors through the atmosphere, which is accompanied by various physical and chemical phenomena. Of particular interest to such studies are the meteors whose trajectories have been recorded and whose fragments have been found on Earth. Here, we study one of such cases. We develop a model for the motion and destruction of natural bodies in the Earth's atmosphere, focusing on the Benešov bolide (EN070591), a bright meteor registered in 1991 in the Czech Republic by the European Observation System. Unique data, including radiation spectra, are available for this bolide. We simulate the aeroballistics of the Benešov meteoroid and of its fragments, taking into account destruction due to thermal and mechanical processes. We compute the velocity of the meteoroid and its mass ablation using the equations of the classical theory of meteor motion, taking into account the variability of the mass ablation along the trajectory. The fragmentation of the meteoroid is considered using the model of sequential splitting and the statistical stress theory, which takes into account the dependence of the mechanical strength on the length scale. We compute air flows around a system of bodies (shards of the meteoroid) in the regime where the mutual interplay between them is essential. To that end, we develop a method of simulating air flows based on a set of grids that allows us to consider fragments of various shapes, sizes, and masses, as well as arbitrary positions of the fragments relative to each other. Due to inaccuracies in the early simulations of the motion of this bolide, its fragments could not be located for about 23 years. Later and more accurate simulations have allowed researchers to locate four of its fragments rather far from the location expected earlier. Our simulations of the motion and destruction of the Benešov bolide show that its interaction with the atmosphere is affected by multiple factors, such as the mass and the mechanical strength of the bolide, the parameters of its motion, the mechanisms of destruction, and the interplay between its fragments.

    Views (last year): 24. Citations: 1 (RSCI).
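
    The classical single-body theory of meteor motion mentioned in the abstract above couples drag deceleration with heat-driven mass ablation along the trajectory. The sketch below integrates these standard equations with an exponential atmosphere; all numerical values are illustrative placeholders rather than the Benešov parameters, and the paper's fragmentation and mutual-interference models are not reproduced.

      import numpy as np
      from scipy.integrate import solve_ivp

      RHO0, H_SCALE = 1.225, 7160.0     # sea-level air density [kg/m^3], scale height [m]
      CD, CH, Q = 1.0, 0.1, 8.0e6       # drag coeff., heat-transfer coeff., heat of ablation [J/kg]
      RHO_BODY, G = 3300.0, 9.81        # bulk density [kg/m^3], gravity [m/s^2]
      THETA = np.radians(80.0)          # entry angle below the horizontal

      def rhs(t, y):
          v, m, h = y
          m = max(m, 1.0)                                   # guard against total ablation
          rho_a = RHO0 * np.exp(-h / H_SCALE)               # exponential atmosphere
          area = 1.21 * (m / RHO_BODY) ** (2.0 / 3.0)       # midsection of a sphere-like body
          dv = -0.5 * CD * rho_a * area * v**2 / m + G * np.sin(THETA)
          dm = -0.5 * CH * rho_a * area * v**3 / Q          # ablation along the trajectory
          dh = -v * np.sin(THETA)
          return [dv, dm, dh]

      def low_altitude(t, y):
          return y[2] - 20.0e3                              # stop at 20 km altitude
      low_altitude.terminal = True

      sol = solve_ivp(rhs, (0.0, 60.0), [21000.0, 4000.0, 90000.0],
                      events=low_altitude, max_step=0.05)
      v_end, m_end, h_end = sol.y[:, -1]
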
