Search results for 'filtering':
Articles found: 13
  1. Belean B., Belean C., Floare C., Varodi C., Bot A., Adam G.
    Grid based high performance computing in satellite imagery. Case study — Perona–Malik filter
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 399-406

    The present paper discusses an approach to efficient satellite image processing which involves two steps. The first step assumes the distribution of the steadily increasing volume of satellite-collected data through a Grid infrastructure. The second step assumes the acceleration of the solution of the individual image-processing tasks by implementing execution codes which make heavy use of spatial and temporal parallelism. An instance of such an execution code is image processing by means of the iterative Perona–Malik filter within an FPGA application-specific hardware architecture.

    Views (last year): 3.
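
    The entry above accelerates the iterative Perona–Malik filter with an FPGA hardware architecture. For reference only, a minimal NumPy sketch of the diffusion iteration itself (not of the Grid distribution or the FPGA design) is given below; the conductance constant, step size, and test image are illustrative assumptions.

    import numpy as np

    def perona_malik(img, n_iter=20, kappa=30.0, dt=0.2):
        """One common form of the iterative Perona-Malik anisotropic diffusion."""
        u = img.astype(np.float64).copy()
        for _ in range(n_iter):
            # Differences to the four nearest neighbours (periodic borders via np.roll, kept for brevity).
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # Exponential conductance: close to 1 in flat regions, small across strong edges.
            g = lambda d: np.exp(-(d / kappa) ** 2)
            u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u

    # Illustrative call on a synthetic noisy image.
    noisy = np.eye(64) * 255 + np.random.normal(0, 20, (64, 64))
    smoothed = perona_malik(noisy, n_iter=30, kappa=25.0)
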
  2. Verentsov S.I., Magerramov E.A., Vinogradov V.A., Gizatullin R.I., Alekseenko A.E., Kholodov Y.A.
    Bayesian localization for autonomous vehicle using sensor fusion and traffic signs
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 295-303

    The localization of a vehicle is an important task in the field of intelligent transportation systems. It is well known that sensor fusion helps to create more robust and accurate systems for autonomous vehicles. Standard approaches, like the extended Kalman filter or the particle filter, are inefficient in the case of highly non-linear data or have a high computational cost, which complicates their use in embedded systems. A significant increase in precision, especially when GPS (Global Positioning System) is unavailable, may be achieved by using landmarks with known locations, such as traffic signs, traffic lights, or SLAM (Simultaneous Localization and Mapping) features. However, this approach may be inapplicable if the a priori locations are unknown or not accurate enough. We suggest a new approach for refining the coordinates of a vehicle by using landmarks such as traffic signs. The core part of the suggested system is a Bayesian framework, which refines the vehicle location using external data about previous traffic sign detections, collected with crowdsourcing. This paper presents an approach that combines trajectories built using global coordinates from GPS and relative coordinates from an Inertial Measurement Unit (IMU) to produce a vehicle's trajectory in an unknown environment. In addition, we collected a new dataset, including data from smartphone GPS and IMU sensors and a video feed from a windshield camera, recorded during 4 car rides on the same route. We also collected precise location data from a Real Time Kinematic Global Navigation Satellite System (RTK-GNSS) device, which can be used for validation. This RTK-GNSS system was also used to collect precise data about the traffic sign locations on the route. The results show that the Bayesian approach helps with trajectory correction and gives better estimates as the amount of prior information increases. The suggested method is efficient and requires, apart from the GPS/IMU measurements, only information about the vehicle locations during previous traffic sign detections.

    Views (last year): 22.
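
    The entry above refines a vehicle position with a Bayesian framework fed by traffic sign detections. The sketch below illustrates only the general idea (not the authors' framework): a particle-style Bayesian reweighting of a 2D position belief given one detected sign with known coordinates; all names, noise parameters, and measurements are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Particle cloud representing the belief about the vehicle's 2D position (prior from GPS/IMU).
    particles = rng.normal(loc=[0.0, 0.0], scale=5.0, size=(1000, 2))
    weights = np.full(len(particles), 1.0 / len(particles))

    def landmark_update(particles, weights, sign_xy, measured_range, sigma=1.5):
        """Bayesian reweighting: particles consistent with the measured distance
        to a traffic sign with known coordinates gain weight."""
        predicted_range = np.linalg.norm(particles - sign_xy, axis=1)
        likelihood = np.exp(-0.5 * ((predicted_range - measured_range) / sigma) ** 2)
        weights = weights * likelihood
        return weights / weights.sum()

    # One detection of a sign at known coordinates (20, 5), measured 18.0 m away.
    weights = landmark_update(particles, weights, np.array([20.0, 5.0]), 18.0)
    estimate = np.average(particles, weights=weights, axis=0)  # refined position
    print(estimate)
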
  3. Chukanov S.N.
    Comparison of complex dynamical systems based on topological data analysis
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 513-525

    The paper considers the possibility of comparing and classifying dynamical systems on the basis of topological data analysis. Determining the measures of interaction between the channels of dynamical systems with the HIIA (Hankel Interaction Index Array) and PM (Participation Matrix) methods makes it possible to build HIIA and PM graphs and their adjacency matrices. For any linear dynamical system, an approximating directed graph can be constructed whose vertices correspond to the components of the state vector of the system and whose arcs correspond to the measures of mutual influence of these components. Constructing a measure of distance (proximity) between the graphs of different dynamical systems is important, for example, for identifying normal operation or failures of a dynamical system or a control system. To compare and classify dynamical systems, weighted directed graphs corresponding to them are first formed, with edge weights equal to the measures of interaction between the channels of the system. Based on the HIIA and PM methods, matrices of measures of interaction between the channels of dynamical systems are determined. The paper gives examples of the formation of weighted directed graphs for various dynamical systems and of the estimation of the distance between these systems by means of topological data analysis. An example is given of the formation of a weighted directed graph for the dynamical system corresponding to the control of the components of the angular velocity vector of an aircraft, which is considered as a rigid body with principal moments of inertia. The method of topological data analysis used here to estimate the distance between the structures of dynamical systems is based on the formation of persistent barcodes and persistent landscape functions. Methods for comparing dynamical systems based on topological data analysis can be used in the classification of dynamical systems and control systems. The use of traditional algebraic topology for the analysis of objects does not provide a sufficient amount of information because of the reduction of the data dimension (and the accompanying loss of geometric information). Topological data analysis provides a balance between reducing the data dimension and characterizing the internal structure of an object. In this paper, topological data analysis methods based on the Vietoris–Rips and Dowker filtrations are used to assign a geometric dimension to each topological feature. Persistent landscape functions are used to map the persistence diagrams into a Hilbert space and then to quantify the comparison of dynamical systems. On the basis of persistent landscape functions, we propose a way to compare the graphs of dynamical systems and to find distances between the systems. For this purpose, weighted directed graphs corresponding to the dynamical systems are formed beforehand. Examples of finding the distance between objects (dynamical systems) are given.
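
    The entry above compares dynamical systems through persistent barcodes and persistence landscapes built from Vietoris–Rips and Dowker filtrations of weighted graphs. As a much smaller illustration of the same idea (not the authors' pipeline), the sketch below computes only the 0-dimensional persistent barcode of a weighted undirected graph filtration with a union-find pass; the interaction matrix values are invented.

    import numpy as np

    def h0_barcode(weights):
        """0-dimensional persistent barcode of a weighted graph filtration:
        each vertex is born at 0; a connected component dies at the weight of
        the edge that merges it into another (edges enter by increasing weight)."""
        n = weights.shape[0]
        parent = list(range(n))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        edges = sorted((weights[i, j], i, j)
                       for i in range(n) for j in range(i + 1, n)
                       if np.isfinite(weights[i, j]))
        bars = []
        for w, i, j in edges:
            ri, rj = find(i), find(j)
            if ri != rj:
                bars.append((0.0, w))       # one component dies at weight w
                parent[ri] = rj
        bars.append((0.0, np.inf))          # the surviving component
        return bars

    # Symmetrized interaction matrix of a toy 3-channel system (illustrative values).
    W = np.array([[0.0, 0.2, 0.9],
                  [0.2, 0.0, 0.5],
                  [0.9, 0.5, 0.0]])
    print(h0_barcode(W))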

  4. Aksyonov K.V., Alekseev V.P.
    Digital signals filtering in continuous entry data mode operation
    Computer Research and Modeling, 2012, v. 4, no. 1, pp. 55-61

    The article addresses the choice of a method for digital signal filtering with continuous ('on-line') data entry and the use of a filtering algorithm based on the fast wavelet transform for a particular applied problem.

    Views (last year): 6. Citations: 7 (RSCI).
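
    The entry above filters digital signals with continuous data entry using the fast wavelet transform. A minimal block-wise sketch with the PyWavelets package follows; the wavelet, threshold rule, and block size are assumptions and are not taken from the article.

    import numpy as np
    import pywt  # PyWavelets

    def denoise_block(block, wavelet="db4", level=3):
        """Soft-threshold wavelet denoising of one block of samples."""
        coeffs = pywt.wavedec(block, wavelet, level=level)
        # Universal threshold estimated from the finest detail coefficients.
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(block)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(block)]

    # Emulate continuous ('on-line') entry: process the stream block by block.
    stream = np.sin(np.linspace(0, 40 * np.pi, 8192)) + 0.3 * np.random.randn(8192)
    filtered = np.concatenate([denoise_block(stream[i:i + 1024])
                               for i in range(0, len(stream), 1024)])
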
  5. Usanov M.S., Kulberg N.S., Morozov S.P.
    Development of anisotropic nonlinear noise-reduction algorithm for computed tomography data with context dynamic threshold
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 233-248

    The article deals with the development of a noise-reduction algorithm based on anisotropic nonlinear filtering of computed tomography (CT) data. Analysis of domestic and foreign literature has shown that the most effective algorithms for noise reduction of CT data use complex methods for analyzing and processing the data, such as bilateral, adaptive, three-dimensional and other types of filtering. However, a combination of such techniques is rarely used in practice because of the long processing time per slice. In this regard, it was decided to develop an efficient and fast noise-reduction algorithm based on a simplified bilateral filtering method with three-dimensional data accumulation. The algorithm was developed in C++11 in Microsoft Visual Studio 2015. The main distinction of the developed noise-reduction algorithm is the use of an improved mathematical model of CT noise, based on the Poisson and Gaussian distributions of the logarithmized value, developed earlier by our team. This allows a more accurate determination of the noise level and thus of the data-processing threshold. As a result of the noise-reduction algorithm, processed CT data with a lower noise level were obtained. Visual evaluation showed the increased information content of the processed data compared to the original data, clearer mapping of homogeneous regions, and a significant reduction of noise in the processed areas. Numerical assessment of the algorithm showed a decrease in the standard deviation (SD) level by more than 6 times in the processed areas, and high values of the coefficient of determination showed that the data were not distorted and changed only due to the removal of noise. The use of the newly developed context dynamic threshold made it possible to decrease the SD level in every area of the data. The main advantage of the developed threshold is its simplicity and speed, achieved by a preliminary estimation of the data array and derivation of threshold values that are put in correspondence with each pixel of the CT. Its principle of operation is based on threshold criteria, which fit well both into the developed noise-reduction algorithm based on anisotropic nonlinear filtering and into other noise-reduction algorithms. The algorithm successfully functions as part of the MultiVox workstation and is being prepared for implementation in the unified radiological network of the city of Moscow.

    Views (last year): 21.
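
    The article builds its noise reduction on a simplified bilateral filtration with a context dynamic threshold and a Poisson-Gaussian noise model; none of that is reproduced here. The sketch below shows only a plain 2D bilateral filter in NumPy, so the basic operation behind the approach is explicit; the window radius and both sigmas are invented values.

    import numpy as np

    def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
        """Plain 2D bilateral filter: each pixel is replaced by a weighted mean of
        its neighbours, with weights combining spatial and intensity closeness."""
        img = img.astype(np.float64)
        h, w = img.shape
        pad = np.pad(img, radius, mode="reflect")
        out = np.zeros_like(img)
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
        for i in range(h):
            for j in range(w):
                window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                rng_w = np.exp(-((window - img[i, j]) ** 2) / (2 * sigma_r**2))
                weights = spatial * rng_w
                out[i, j] = np.sum(weights * window) / np.sum(weights)
        return out

    # Toy CT-like slice with additive noise.
    slice_ = np.random.normal(100.0, 15.0, (64, 64))
    denoised = bilateral_filter(slice_)
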
  6. Grigorieva A.V., Maksimenko M.V.
    Method for processing acoustic emission testing data to define signal velocity and location
    Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1029-1040

    Non-destructive acoustic emission testing is an effective and cost-efficient way to examine pressure vessels for hidden defects (cracks, laminations, etc.), as well as the only method that is sensitive to developing defects. The sound velocity in the test object and its adequate definition in the location scheme are of paramount importance for the accurate detection of the acoustic emission source. The acoustic emission data processing method proposed herein comprises a set of numerical methods and allows defining the source coordinates and the most probable velocity for each signal. The method includes pre-filtering of the data by amplitude and by time differences and elimination of electromagnetic interference. A set of numerical methods is then applied to solve the system of nonlinear equations, in particular the Newton–Kantorovich method and a general iterative process. The velocity of a signal from one source is assumed to be constant in all directions. The center of gravity of the triangle formed by the first three sensors that registered the signal is taken as the initial approximation. The method developed has an important practical application, and the paper provides an example of its approbation in the calibration of an acoustic emission system at a production facility (a hydrocarbon gas purification absorber). Criteria for pre-filtering of the data are described. The obtained locations are in good agreement with the signal generation sources, and the velocities even reflect the Rayleigh–Lamb division of acoustic waves due to the different distances of the signal sources from the sensors. The article contains a graph of the average signal velocity against the distance from its source to the nearest sensor. The main advantage of the method developed is its ability to detect the locations of signals with different velocities within a single test. This makes it possible to increase the degree of freedom in the calculations and thereby increase their accuracy.
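
    The entry above finds the source coordinates and signal velocity by solving a system of nonlinear equations with the Newton–Kantorovich method. The sketch below reproduces only the general formulation on synthetic data, using a library least-squares solver instead of the authors' iterative scheme; the sensor layout, arrival times, and initial velocity guess are invented.

    import numpy as np
    from scipy.optimize import least_squares

    # Sensor coordinates (m) and arrival times (s) of one AE signal; values are synthetic.
    sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    true_src, true_v = np.array([0.3, 0.6]), 3000.0
    t = np.linalg.norm(sensors - true_src, axis=1) / true_v   # ideal arrival times

    def residuals(p):
        """Time-difference-of-arrival residuals for the unknowns (x, y, v)."""
        x, y, v = p
        d = np.linalg.norm(sensors - [x, y], axis=1)
        return (d - d[0]) / v - (t - t[0])

    # Initial approximation: center of gravity of the first three sensors, guessed velocity.
    p0 = [*sensors[:3].mean(axis=0), 2000.0]
    sol = least_squares(residuals, p0)
    print(sol.x)   # estimated (x, y, v)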

  7. Chubatov A.A., Karmazin V.N.
    The stable estimation of intensity of atmospheric pollution source on the base of sequential function specification method
    Computer Research and Modeling, 2009, v. 1, no. 4, pp. 391-403

    The approach presented in this work helps to organize operational monitoring of the intensity of pollutant emissions into the atmosphere. It allows sequential estimation of the unknown intensity of an atmospheric pollution source from impurity concentration measurements at several stationary control points. The inverse problem was solved by means of step-by-step regularization and the sequential function specification method. The solution is presented in the form of a digital filter in the sense of Hamming. A fitting algorithm for the regularization parameter r of the function specification method is described.

    Views (last year): 2.
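
    The entry above recovers the source intensity with the sequential function specification method. The sketch below is a single-observation-point illustration of that method under an assumed discrete convolution model; the impulse response, the number of future steps r, and the synthetic data are all assumptions, and the article's multi-point, digital-filter formulation is not reproduced.

    import numpy as np

    def beck_sequential(c_meas, h, r=4):
        """Sequential function specification (Beck) estimate of a source intensity q
        from measured concentrations c, under the discrete convolution model
        c_k = sum_{j<=k} h[k-j] * q[j], with q held constant over r future steps."""
        n = len(c_meas)
        q = np.zeros(n)
        for m in range(n):
            steps = min(r, n - m)
            # Sensitivity of measurement m+p to a unit intensity held from step m on.
            phi = np.array([h[:p + 1].sum() for p in range(steps)])
            # Part of each measurement already explained by earlier estimates q[0..m-1].
            base = np.array([h[1:p + m + 1][::-1] @ q[:p + m] for p in range(steps)])
            resid = c_meas[m:m + steps] - base
            q[m] = (phi @ resid) / (phi @ phi)   # least squares over the r future steps
        return q

    # Synthetic check with an assumed impulse response and a step-like emission.
    n = 60
    h = 0.1 * np.exp(-np.arange(n) / 10.0)            # assumed impulse response
    q_true = np.where(np.arange(n) > 20, 1.0, 0.3)    # "unknown" intensity
    c = np.convolve(h, q_true)[:n] + 0.005 * np.random.randn(n)
    q_est = beck_sequential(c, h)
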
  8. Vrazhnov D.A., Shapovalov A.V., Nikolaev V.V.
    Symmetries of differential equations in computer vision applications
    Computer Research and Modeling, 2010, v. 2, no. 4, pp. 369-376

    In our work we present a generalization of the well-known approach to the construction of invariant feature vectors of images in computer vision applications. The key feature of the suggested algorithm is the replacement of the commonly used Gaussian filter by convolution of the image function with the Green's function of an evolution operator, which inherits the symmetries of this operator. The use of such general filtering makes it possible to obtain additional characteristics of invariant feature vectors.

    Views (last year): 8. Citations: 4 (RSCI).
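
    The entry above replaces Gaussian smoothing by convolution with the Green's function of an evolution operator. One hedged illustration (not the authors' construction): for the constant-coefficient diffusion operator u_t = div(D grad u), the Green's function is an anisotropic Gaussian with covariance 2tD, so filtering with it preserves the operator's preferred directions. The diffusion tensor, time, and kernel radius below are invented.

    import numpy as np
    from scipy.signal import convolve2d

    def evolution_kernel(t=2.0, D=((2.0, 0.6), (0.6, 0.5)), radius=8):
        """Green's function of the constant-coefficient diffusion operator
        u_t = div(D grad u): an anisotropic Gaussian with covariance 2*t*D,
        which inherits the symmetries of the operator."""
        cov = 2.0 * t * np.asarray(D)
        inv = np.linalg.inv(cov)
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        pts = np.stack([xs, ys], axis=-1)
        k = np.exp(-0.5 * np.einsum("...i,ij,...j->...", pts, inv, pts))
        return k / k.sum()

    # Filtering an image with this kernel instead of the isotropic Gaussian.
    img = np.random.rand(128, 128)
    filtered = convolve2d(img, evolution_kernel(), mode="same", boundary="symm")
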
  9. Cheremisina E.N., Senner A.E.
    The use of GIS INTEGRO in searching tasks for oil and gas deposits
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 439-444

    GIS INTEGRO is a geo-information software system forming the basis for the integrated interpretation of geophysical data in studying the deep structure of the Earth. GIS INTEGRO combines a variety of computational and analytical applications for the solution of geological and geophysical problems. It includes various interfaces that allow changing the form of data representation (raster, vector, regular and irregular observation networks), a map-projection conversion unit, and application blocks, including a block for integrated data analysis and for solving prognostic and diagnostic tasks.

    The methodological approach is based on the integration and combined analysis of geophysical data along regional profiles, geophysical potential fields, and additional geological information on the study area. Analytical support includes packages for transformations, filtering, statistical processing, computation, lineament detection, solving direct and inverse problems, and integration of geographic information.

    The technology and the software and analytical support were tested in solving tectonic zoning problems at scales 1:200000 and 1:1000000 in Yakutia, Kazakhstan and the Rostov region, in studying the deep structure along the regional profiles 1:S, 1-SC, 2-SAT, 3-SAT and 2-DV, and in oil and gas forecasting in regions of Eastern Siberia and Brazil.

    The article describes two possible approaches to parallel calculations for processing 2D or 3D grids of data in the field of geophysical research. As an example, the implementation in a GRID environment of the application software ZondGeoStat (statistical sensing), which creates a 3D grid model on the basis of 2D grid data, is presented. The experience has demonstrated the high efficiency of using a GRID environment for calculations in the field of geophysical research.

    Views (last year): 4.
  10. Ososkov G.A., Bakina O.V., Baranov D.A., Goncharov P.V., Denisenko I.I., Zhemchugov A.S., Nefedov Y.A., Nechaevskiy A.V., Nikolskaya A.N., Shchavelev E.M., Wang L., Sun S., Zhang Y.
    Tracking on the BESIII CGEM inner detector using deep learning
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1361-1381

    The reconstruction of charged particle trajectories in tracking detectors is a key problem in the analysis of experimental data for high energy and nuclear physics.

    The amount of data in modern experiments is so large that classical tracking methods such as the Kalman filter cannot process them fast enough. To solve this problem, we have developed two neural network track-recognition algorithms, based on deep learning architectures, for local (track by track) and global (all tracks in an event) tracking in the GEM tracker of the BM@N experiment at JINR (Dubna). The advantage of deep neural networks is their ability to detect hidden nonlinear dependencies in data and the capability of parallel execution of the underlying linear algebra operations.

    In this work we generalize these algorithms to the cylindrical GEM inner tracker of BESIII experiment. The neural network model RDGraphNet for global track finding, based on the reverse directed graph, has been successfully adapted. After training on Monte Carlo data, testing showed encouraging results: recall of 98% and precision of 86% for track finding.

    The local neural network model TrackNETv2 was also successfully adapted to the BESIII CGEM. Since the tracker has only three detecting layers, an additional neuro-classifier to filter out false tracks has been introduced. Preliminary tests demonstrated a recall of 99% at the first stage. After applying the neuro-classifier, the precision was 77%, with a slight decrease of the recall to 94%. This result can be improved after further model optimization.
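
    The entry above augments TrackNETv2 with a neuro-classifier that filters out false track candidates on the three-layer CGEM tracker. The actual models (RDGraphNet, TrackNETv2) are not reproduced here; the sketch below is only a generic PyTorch binary classifier over three-hit candidates, with invented features, layer sizes, and training data.

    import torch
    import torch.nn as nn

    class CandidateClassifier(nn.Module):
        """Toy binary classifier: does a 3-hit candidate (x, y, z per layer)
        look like a real track or a false combination?"""
        def __init__(self, n_features=9, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x):
            return self.net(x).squeeze(-1)   # logit; apply sigmoid for a probability

    model = CandidateClassifier()
    loss_fn = nn.BCEWithLogitsLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One synthetic training step on made-up candidates and labels.
    candidates = torch.randn(256, 9)                  # 256 three-hit candidates
    labels = torch.randint(0, 2, (256,)).float()      # 1 = real track, 0 = fake
    loss = loss_fn(model(candidates), labels)
    opt.zero_grad(); loss.backward(); opt.step()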
