Search results for 'signal processing':
Articles found: 31
  1. Grigorieva A.V., Maksimenko M.V.
    Method for processing acoustic emission testing data to define signal velocity and location
    Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1029-1040

    Non-destructive acoustic emission testing is an effective and cost-efficient way to examine pressure vessels for hidden defects (cracks, laminations, etc.), and it is the only method sensitive to developing defects. The sound velocity in the test object, and its adequate definition in the location scheme, is of paramount importance for accurate detection of the acoustic emission source. The acoustic emission data processing method proposed here combines a set of numerical methods and makes it possible to determine the source coordinates and the most probable velocity for each signal. The method includes pre-filtering of the data by amplitude and by time differences, as well as elimination of electromagnetic interference. A set of numerical methods is then applied to solve the system of nonlinear equations, in particular the Newton – Kantorovich method and a general iterative process. The velocity of a signal from a given source is assumed constant in all directions. The initial approximation is taken to be the center of gravity of the triangle formed by the first three sensors that registered the signal. The method has an important practical application, and the paper provides an example of its approbation during the calibration of an acoustic emission system at a production facility (a hydrocarbon gas purification absorber). Criteria for pre-filtering of the data are described. The obtained locations agree well with the signal generation sources, and the velocities even reflect the Rayleigh-Lamb division of acoustic waves due to the different distances between the signal sources and the sensors. The article contains a graph of the average signal velocity against the distance from its source to the nearest sensor. The main advantage of the method is its ability to locate signals of different velocities within a single test. This increases the number of degrees of freedom in the calculations and thereby improves their accuracy.
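    The core numerical step described above, recovering the source coordinates and a per-signal velocity from arrival times, can be sketched as follows. This is an illustration only, not the authors' code: a generic least-squares solver stands in for the Newton – Kantorovich scheme, the geometry is planar, and the sensor layout and arrival times are synthetic.

```python
# Minimal sketch (not the authors' code): locating an acoustic-emission source
# in a plane from arrival times at several sensors, with the signal velocity
# treated as an extra unknown. A generic least-squares solver stands in for the
# Newton-Kantorovich iteration described in the abstract; sensor positions and
# arrival times below are synthetic.
import numpy as np
from scipy.optimize import least_squares

sensors = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0], [3.0, 4.0]])   # m
true_src, true_v, true_t0 = np.array([1.2, 2.5]), 3200.0, 0.0          # m, m/s, s
arrivals = true_t0 + np.linalg.norm(sensors - true_src, axis=1) / true_v

def residuals(p):
    x, y, v, t0 = p
    dist = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
    return t0 + dist / v - arrivals

# Initial guess: centroid of the first three triggered sensors, a plausible
# velocity, and the earliest arrival as the emission time.
x0 = [*sensors[:3].mean(axis=0), 3000.0, arrivals.min()]
sol = least_squares(residuals, x0)
print("source:", sol.x[:2], "velocity:", sol.x[2])
```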

  2. Yakovleva T.V.
    Signal and noise parameters’ determination at rician data analysis by method of moments of lower odd orders
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 717-728

    The paper develops a new mathematical method for the joint determination of the signal and noise parameters of the Rice statistical distribution by the method of moments, based on the analysis of the first and third raw moments of the Rician random variable. An explicit system of equations for the required signal and noise parameters has been obtained. In the limiting case of a small signal-to-noise ratio, analytical formulas have been derived that allow the required parameters to be calculated without solving the equations numerically. The technique developed in the paper ensures an efficient separation of the informative and noise components of the analyzed data without any a priori restrictions, based solely on processing the sampled measurements of the signal. The task is meaningful for Rician data processing, in particular in magnetic resonance imaging systems, in ultrasound imaging systems, in the analysis of optical signals in range-measuring systems, in radiolocation, etc. The results of the investigation have shown that solving the two-parameter task by the proposed technique does not increase the required amount of computing resources compared with the one-parameter task solved under the approximation that the second parameter is known a priori. The results of computer simulation of the elaborated technique are provided. The numerical calculation of the signal and noise parameters has confirmed the efficiency of the technique. The accuracy of estimating the sought-for parameters by the technique developed in this paper has been compared with that of the previously elaborated method of moments based on processing the measured data for the lower even moments of the analyzed signal.
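    A minimal sketch of the idea (my own illustration, not the paper's code): the signal and noise parameters of the Rice distribution are estimated jointly by matching the first and third raw sample moments to their theoretical values; a generic root finder replaces the paper's explicit system and limiting-case formulas.

```python
# Joint estimation of the Rician signal parameter nu and noise sigma from the
# 1st and 3rd raw sample moments. scipy.stats.rice uses the shape parameter
# b = nu / sigma and scale = sigma; the sample here is synthetic.
import numpy as np
from scipy.stats import rice
from scipy.optimize import fsolve

rng = np.random.default_rng(0)
nu_true, sigma_true = 2.0, 1.0
sample = rice.rvs(nu_true / sigma_true, scale=sigma_true, size=20000, random_state=rng)
m1, m3 = sample.mean(), np.mean(sample**3)

def moment_equations(p):
    nu, sigma = p
    return [rice.moment(1, nu / sigma, scale=sigma) - m1,
            rice.moment(3, nu / sigma, scale=sigma) - m3]

nu_hat, sigma_hat = fsolve(moment_equations, x0=[m1, 0.5 * m1])
print(nu_hat, sigma_hat)   # close to the true values 2.0 and 1.0
```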

    Views (last year): 10. Citations: 1 (RSCI).
  3. The work is devoted to the problem of creating a model with stationary parameters using historical data under conditions of unknown disturbances. The case is considered when a representative sample of object states can be formed only from historical data accumulated over a significant period of time. It is assumed that unknown disturbances can act in a wide frequency range and may have low-frequency and trend components. In such a situation, including data from different time periods in the sample can lead to inconsistencies and greatly reduce the accuracy of the model. The paper provides an overview of approaches and methods for data harmonization, with the main attention paid to data sampling. The applicability of various data sampling options as a tool for reducing the level of uncertainty is assessed. We propose a method for identifying a model of a self-leveling object using data accumulated over a significant period of time under conditions of unknown disturbances with a wide frequency range. The method is focused on creating a model with stationary parameters that does not require periodic retuning to new conditions. The method is based on the combined use of sampling and the representation of data from individual time periods in the form of increments relative to the initial point in time of the period. This makes it possible to reduce the number of parameters that characterize unknown disturbances with a minimum of assumptions that limit the application of the method. As a result, the dimensionality of the search problem is reduced and the computational costs associated with setting up the model are minimized. Both linear and, in some cases, nonlinear models can be configured. The method was used to develop a model of closed cooling of steel in a continuous hot-dip galvanizing line for steel strip. The model can be used for predictive control of thermal processes and for selecting the strip speed. It is shown that the method makes it possible to develop a model of the thermal processes of a closed cooling section under conditions of unknown disturbances, including low-frequency components.
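    A toy sketch of the increment idea (illustrative only; the object, the model form and all numbers are invented): data from several historical periods are expressed as increments relative to the first sample of each period, so a slowly varying unknown disturbance, roughly constant within a period, cancels and the pooled data yield a consistent parameter estimate.

```python
# Fitting a static linear model y = a*u + d(t) from several historical periods,
# where d(t) is an unknown slowly varying disturbance. Increments relative to
# the first point of each period remove the (approximately constant within a
# period) disturbance, so data from different periods can be pooled.
import numpy as np

rng = np.random.default_rng(1)
a_true = 2.5
periods_u, periods_y = [], []
for k in range(4):                        # four periods recorded far apart in time
    u = rng.uniform(0, 10, size=50)
    d = 20.0 * k + rng.normal(0, 0.1)     # disturbance level differs per period
    periods_u.append(u)
    periods_y.append(a_true * u + d + rng.normal(0, 0.2, size=50))

# Increments relative to the first point of each period.
du = np.concatenate([u - u[0] for u in periods_u])
dy = np.concatenate([y - y[0] for y in periods_y])
a_hat = np.linalg.lstsq(du[:, None], dy, rcond=None)[0][0]
print("estimated gain:", a_hat)           # close to 2.5 despite shifted disturbances
```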

  4. Shriethar N., Muthu M.
    Topology-based activity recognition: stratified manifolds and separability in sensor space
    Computer Research and Modeling, 2025, v. 17, no. 5, pp. 829-850

    While working on activity recognition using wearable sensors for healthcare applications, the main issue arises in the classification of activities. When we attempt to classify activities like walking, sitting, or running from accelerometer and gyroscope data, the signals often overlap and noise complicates the classification process. The existing methods do not have solid mathematical foundations to handle this issue. We started with the standard magnitude approach, where one computes $m = \sqrt{a_1^2 + a_2^2 + a_3^2}$ from the accelerometer readings, but this approach failed because different activities ended up in overlapping regions. We therefore developed a different approach. Instead of collapsing the 6-dimensional sensor data into simple magnitudes, we keep all six dimensions and treat each activity as a rectangular box in this 6D space. We define these boxes using simple interval constraints. For example, walking occurs when the $x$-axis accelerometer reading is between $2$ and $4$, the $y$-axis reading is between $9$ and $10$, and so on. The key breakthrough is what we call a separability index $s = \frac{d_{\min}}{\sigma}$ that determines how accurately the classification will work. Here $d_{\min}$ represents how far apart the activity boxes are, and $\sigma$ represents the amount of noise present. From this simple idea, we derive a mathematical formula $P(\text{error}) \leqslant (n-1)\exp\left(-\frac{s^2}{8}\right)$ that predicts the error rate even before the experiment is run. We tested this on the standard UCI-HAR and WISDM datasets and achieved $86.1\%$ accuracy. The theoretical predictions matched the actual results to within $3\%$. This approach outperforms the traditional magnitude methods by $30.6\%$ and explains why certain activities overlap with each other.
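    A small sketch of the quantities named in the abstract (a toy illustration with made-up box bounds and noise level, not the authors' pipeline): activities are axis-aligned boxes, $d_{\min}$ is the smallest gap between any two boxes, $s = d_{\min}/\sigma$, and the bound $P(\text{error}) \leqslant (n-1)\exp(-s^2/8)$ is evaluated directly.

```python
# Activities as axis-aligned boxes in sensor space, the separability index
# s = d_min / sigma, and the error bound (n - 1) * exp(-s**2 / 8) quoted in
# the abstract. The box bounds and noise level are invented for illustration.
import numpy as np
from itertools import combinations

# Each activity: (lower_bounds, upper_bounds) in a 2D slice of the sensor space.
boxes = {
    "walking": (np.array([2.0, 9.0]), np.array([4.0, 10.0])),
    "sitting": (np.array([0.0, 0.0]), np.array([1.0, 1.5])),
    "running": (np.array([6.0, 11.0]), np.array([9.0, 14.0])),
}
sigma = 0.6   # assumed sensor noise level

def box_gap(a, b):
    """Euclidean distance between two axis-aligned boxes (0 if they overlap)."""
    (lo1, hi1), (lo2, hi2) = a, b
    per_axis = np.maximum(0.0, np.maximum(lo1 - hi2, lo2 - hi1))
    return np.linalg.norm(per_axis)

d_min = min(box_gap(boxes[i], boxes[j]) for i, j in combinations(boxes, 2))
s = d_min / sigma
n = len(boxes)
print("separability index:", s)
print("predicted error bound:", (n - 1) * np.exp(-s**2 / 8))
```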

  5. Chernavskaya O.D.
    Dynamical theory of information as a basis for natural-constructive approach to modeling a cognitive process
    Computer Research and Modeling, 2017, v. 9, no. 3, pp. 433-447

    The main statements and inferences of the Dynamic Theory of Information (DTI) are considered. It is shown that DTI makes it possible to reveal two essentially important types of information: objective (unconventional) and subjective (conventional) information. There are two ways of obtaining information: reception (perception of already existing information) and generation (production of new information). It is shown that the processes of generation and perception of information should proceed in two different subsystems of the same cognitive system. The main points of the Natural-Constructive Approach to modeling the cognitive process are discussed. It is shown that any neuromorphic approach faces the problem of the Explanatory Gap between the “Brain” and the “Mind”, i.e. the gap between objectively measurable information about the ensemble of neurons (“Brain”) and subjective information about human consciousness (“Mind”). The Natural-Constructive Cognitive Architecture developed within the framework of this approach is discussed. It is a complex block-hierarchical combination of several neuroprocessors. The main constructive feature of this architecture is the splitting of the whole system into two linked subsystems, by analogy with the hemispheres of the human brain. One of the subsystems is responsible for processing new information, learning, and creativity, i.e. for the generation of information. The other subsystem is responsible for processing already existing information, i.e. for the reception of information. It is shown that the lowest (zero) level of the hierarchy is represented by processors that record images of real objects (distributed memory) in response to sensory signals, which constitutes objective information (and refers to the “Brain”). The next levels of the hierarchy are represented by processors containing symbols of the recorded images. It is shown that symbols represent subjective (conventional) information created by the system itself and providing its individuality. The highest levels of the hierarchy, containing the symbols of abstract concepts, make it possible to interpret the concepts of “consciousness”, “sub-consciousness” and “intuition”, which refer to the field of “Mind”, in terms of the ensemble of neurons. Thus, DTI provides an opportunity to build a model that allows us to trace how the “Mind” could emerge based on the “Brain”.

    Views (last year): 6.
  6. Vetchanin E.V., Tenenev V.A., Kilin A.A.
    Optimal control of the motion in an ideal fluid of a screw-shaped body with internal rotors
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 741-759

    In this paper we consider the controlled motion in an ideal fluid of a helical body with three blades, which is achieved by rotating three internal rotors. We pose the problem of selecting control actions that ensure the motion of the body near a predetermined trajectory. To determine controls that guarantee motion near the given curve, we propose methods based on hybrid genetic algorithms (genetic algorithms with real encoding and with additional learning of the population leader by a gradient method) and artificial neural networks. The correctness of the proposed numerical methods is assessed using previously obtained differential equations that define the law of variation of the control actions for the predetermined trajectory.

    In the approach based on hybrid genetic algorithms, the initial problem of minimizing the integral functional reduces to minimizing a function of many variables. The given time interval is broken up into small elements, on each of which the control actions are approximated by Lagrange polynomials of orders 2 and 3. When appropriately tuned, the hybrid genetic algorithms reproduce a solution close to the exact one. However, the calculation of 1 second of the physical process takes about 300 seconds of processor time.

    To speed up the calculation of control actions, we propose an algorithm based on artificial neural networks. The neural network takes the components of the required displacement vector as its input signal and returns as output the node values of the Lagrange polynomials that approximate the control actions. The neural network is trained by the well-known back-propagation method, with the training sample generated using the approach based on hybrid genetic algorithms. The calculation of 1 second of the physical process by means of the neural network requires about 0.004 seconds of processor time, that is, tens of thousands of times faster than the hybrid genetic algorithm. The control calculated by the artificial neural network differs from the exact control, but in spite of this difference it still ensures that the body follows the predetermined trajectory.
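    The representation of the control actions used in both approaches can be illustrated with a short sketch (illustrative node values, not data from the paper): on each small time element the control is a low-order Lagrange polynomial through a few node values, and it is exactly such node values that the optimizer or the trained network supplies.

```python
# A control action on one small time element represented by a degree-2 Lagrange
# polynomial through three node values. The node values here are arbitrary.
import numpy as np
from scipy.interpolate import lagrange

t_nodes = np.array([0.0, 0.5, 1.0])    # normalized nodes of one element
u_nodes = np.array([0.1, 0.35, 0.2])   # node values of the control action
u_poly = lagrange(t_nodes, u_nodes)    # degree-2 interpolating polynomial

t = np.linspace(0.0, 1.0, 11)
print(u_poly(t))                       # control values used inside the element
```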

    Views (last year): 12. Citations: 1 (RSCI).
  7. Golubev V.I., Khokhlov N.I.
    Estimation of anisotropy of seismic response from fractured geological objects
    Computer Research and Modeling, 2018, v. 10, no. 2, pp. 231-240

    Seismic surveying is a common method of prospecting for and exploring oil and natural gas deposits. Invented at the beginning of the 20th century, it has developed significantly and is currently used by almost all oilfield service companies. Its main advantages are the acceptable cost of fieldwork (in comparison with drilling wells) and the accuracy with which the characteristics of the subsurface area are estimated. However, with the discovery of non-traditional deposits (for example, the Arctic shelf and the Bazhenov Formation), the task of improving existing seismic data processing technologies and creating new ones has become important. Significant progress in this direction is possible through numerical simulation of the propagation of seismic waves in realistic models of the geological medium, since an arbitrary internal structure of the medium can be specified and the synthetic signal-response subsequently evaluated.

    The present work is devoted to the study of spatial dynamic processes occurring in a geological medium containing fractured inclusions during seismic exploration. The authors constructed a three-dimensional model of a layered massif containing a layer of fluid-saturated cracks, which makes it possible to estimate the signal-response as the structure of the inhomogeneous inclusion is varied. To describe the physical processes, a system of second-order partial differential equations for a linearly elastic body is used, which is solved numerically by a grid-characteristic method on a hexahedral grid. In this case, the crack planes are identified at the stage of grid construction, and an additional correction is then applied to ensure a correct seismic response for model parameters typical of geological media.

    In the paper, three-component areal seismograms with a common shot (explosion) point were obtained. On their basis, the effect of the structure of the fractured medium on the anisotropy of the seismic response recorded at the ground surface at different distances from the source was estimated. It is established that the kinematic characteristics of the signal remain constant, while the dynamic characteristics for ordered and disordered models can differ by tens of percent.
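    A one-dimensional sketch of the grid-characteristic approach mentioned above (much simpler than the paper's three-dimensional scheme on hexahedral grids; the material values and source are arbitrary): the velocity-stress form of the elastic wave equation is rewritten in Riemann invariants, which are advected along the characteristics with a first-order upwind update.

```python
# First-order grid-characteristic step for 1D elastic waves in velocity-stress
# form. The Riemann invariants w+ = v + s/(rho*c) and w- = v - s/(rho*c) are
# transported along the characteristics; boundaries are left untouched for brevity.
import numpy as np

rho, c = 2500.0, 3000.0          # density (kg/m^3) and wave speed (m/s)
nx, dx = 400, 5.0
cfl = 0.9
dt = cfl * dx / c                # time step implied by the Courant number

x = np.arange(nx) * dx
v = np.exp(-((x - 1000.0) / 50.0) ** 2)   # initial velocity pulse (the "source")
s = np.zeros(nx)                           # initial stress

for _ in range(150):
    wp = v + s / (rho * c)       # invariant moving toward decreasing x
    wm = v - s / (rho * c)       # invariant moving toward increasing x
    wp[:-1] = wp[:-1] + cfl * (wp[1:] - wp[:-1])   # upwind from the right neighbor
    wm[1:] = wm[1:] - cfl * (wm[1:] - wm[:-1])     # upwind from the left neighbor
    v = 0.5 * (wp + wm)
    s = 0.5 * rho * c * (wp - wm)
```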

    Views (last year): 11. Citations: 4 (RSCI).
  8. Nazarov V.G., Prokhorov I.V., Yarovenko I.P.
    Identification of inhomogeneous matter by pulsed multienergy tomography methods
    Computer Research and Modeling, 2025, v. 17, no. 4, pp. 621-639

    The article considers the mathematical aspects of the problem of identifying a multicomponent scattering medium based on pulsed multienergy X-ray irradiation data. X-ray diagnostics problems are of considerable interest from both theoretical and practical points of view, and radiographic methods are indispensable in non-destructive testing of products.

    Within the framework of a mathematical model based on the non-stationary integro-differential radiative transfer equation, two problems are formulated: the inverse problem of finding the attenuation coefficient from the radiation known at the boundary of the region, and the problem of identifying a substance from the found values of the attenuation coefficient on a discrete set of irradiation energies.

    A preliminary analysis of a wide list of substances of interest in computed tomography was carried out to determine whether they can be identified from an approximately specified radiation attenuation coefficient characterizing the medium. An analysis of the proximity of substances in a certain norm showed that the set of all possible substances potentially contained in the medium splits into a finite number of non-intersecting clusters. For a sufficiently short duration of the probing signal, the scattered component of the radiation leaving the medium is asymptotically small. This circumstance allows the inverse problem for the radiative transfer equation to be reduced to inverting the Radon transform of the attenuation coefficient. The possibility of unambiguous or partial identification of a substance by varying the duration of the probing pulse and the number of irradiation energy levels is analyzed by numerical modeling on a specially developed digital phantom.
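    A minimal sketch of the identification step in the short-pulse limit (illustrative only: the phantom, the two irradiation energies and the reference attenuation values are made up, and scikit-image's filtered back-projection is used as a generic Radon inversion).

```python
# Reconstruct the attenuation coefficient at each energy by inverting the Radon
# transform, then identify the material by the nearest reference attenuation
# vector. Reference values and the phantom are synthetic.
import numpy as np
from skimage.transform import radon, iradon

mu_ref = {"water": np.array([0.20, 0.15]), "aluminium": np.array([0.55, 0.35])}

phantom = np.zeros((2, 128, 128))                               # air background
phantom[:, 40:70, 50:90] = mu_ref["aluminium"][:, None, None]   # "unknown" inclusion

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
recon = np.stack([iradon(radon(p, theta=theta), theta=theta) for p in phantom])

# Identify the material inside the inclusion by the nearest attenuation vector.
mu_probe = recon[:, 55, 70]
best = min(mu_ref, key=lambda name: np.linalg.norm(mu_ref[name] - mu_probe))
print("identified as:", best)
```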

  9. Burlakov E.A.
    Relation between performance of organization and its structure during sudden and smoldering crises
    Computer Research and Modeling, 2016, v. 8, no. 4, pp. 685-706

    The article describes a mathematical model that simulates the performance of a hierarchical organization during the early stage of a crisis. A distinguishing feature of this stage is the presence of so-called early warning signals containing information on the approaching event. Employees are capable of catching the early warnings and of preparing the organization for the crisis based on the signals’ meaning. The efficiency of the preparation depends on both the parameters of the organization and the parameters of the crisis. The proposed agent-based simulation model is implemented in the Java programming language and is used for conducting experiments via the Monte Carlo method. The goal of the experiments is to compare how centralized and decentralized organizational structures perform during sudden and smoldering crises. By centralized organizations we mean structures with a high number of hierarchy levels and a low number of direct reports per manager, while decentralized organizations are structures with a low number of hierarchy levels and a high number of direct reports per manager. Sudden crises are distinguished by a short early stage and a low number of warning signals, while smoldering crises have a long early stage and a high number of warning signals that do not necessarily contain important information. The efficiency of organizational performance during the early stage of a crisis is measured by two parameters: the percentage of early warnings acted upon in order to prepare the organization for the crisis, and the time spent by the top manager on working with early warnings. As a result, we show that during the early stage of smoldering crises centralized organizations process signals more efficiently than decentralized ones, while decentralized organizations handle early warning signals more efficiently during the early stage of sudden crises. However, the workload of top managers during sudden crises is higher in decentralized organizations, and during smoldering crises it is higher in centralized organizations. Thus, neither of the two classes of organizational structures is more efficient by both parameters simultaneously. Finally, we conduct a sensitivity analysis to verify the obtained results.

    Views (last year): 2. Citations: 2 (RSCI).
  10. Lyubushin A.A., Farkov Y.A.
    Synchronous components of financial time series
    Computer Research and Modeling, 2017, v. 9, no. 4, pp. 639-655

    The article proposes a method for the joint analysis of multidimensional financial time series based on evaluating a set of properties of stock quotes in a sliding time window and subsequently averaging the property values over all analyzed companies. The main purpose of the analysis is to construct measures of the joint behavior of time series that react to the occurrence of a synchronous or coherent component. Coherence in the behavior of the characteristics of a complex system is an important feature that makes it possible to assess how close the system is to sharp changes in its state. The basis for the search for precursors of sharp changes is the general idea that the correlation of random fluctuations of the system parameters increases as the system approaches a critical state. The increments of stock-price time series have a pronounced chaotic character and a large amplitude of individual noise, against which a weak common signal can be detected only on the basis of its correlation in the different scalar components of the multidimensional time series. It is known that classical methods of analysis based on correlations between neighboring samples are ineffective for processing financial time series, since from the point of view of the correlation theory of random processes stock-price increments formally have all the attributes of white noise (in particular, a “flat spectrum” and a “delta-shaped” autocorrelation function). In this connection, it is proposed to pass from analyzing the initial signals to examining sequences of their nonlinear properties calculated in short time fragments. As such properties, the entropy of the wavelet coefficients in the decomposition over a Daubechies basis, multifractal parameters and an autoregressive measure of signal nonstationarity are used. Measures of the synchronous behavior of the time series properties in a sliding time window are constructed using the principal component method, the moduli of all pairwise correlation coefficients, and a multiple spectral coherence measure that generalizes the quadratic coherence spectrum between two signals. The shares of 16 large Russian companies from the beginning of 2010 to the end of 2016 were studied. Using the proposed method, two synchronization intervals of the Russian stock market were identified: from mid-December 2013 to mid-March 2014 and from mid-October 2014 to mid-January 2016.
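    A minimal sketch of one such measure (synthetic increments, not the stock data studied in the paper): the mean modulus of the pairwise correlation coefficients of the component series in a sliding window, which rises when a common component appears.

```python
# Sliding-window measure of synchronous behavior: the mean modulus of pairwise
# correlation coefficients of the increments of several series. A common
# component injected into the second half makes the measure rise there.
import numpy as np

rng = np.random.default_rng(3)
n_days, n_stocks, win = 1000, 16, 60
increments = rng.normal(0.0, 1.0, size=(n_days, n_stocks))
common = rng.normal(0.0, 1.0, size=n_days)
increments[n_days // 2:] += 0.7 * common[n_days // 2:, None]   # synchronous episode

def mean_abs_corr(window):
    c = np.corrcoef(window, rowvar=False)
    iu = np.triu_indices_from(c, k=1)
    return np.abs(c[iu]).mean()

measure = [mean_abs_corr(increments[t:t + win]) for t in range(0, n_days - win, win)]
print(np.round(measure, 2))   # noticeably larger in the second half
```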

    Views (last year): 12. Citations: 2 (RSCI).