Search results for 'time-series':
Articles found: 47
  1. Lyubushin A.A., Kopylova G.N., Kasimova V.A., Taranova L.N.
    Multifractal and entropy statistics of seismic noise in Kamchatka in connection with the strongest earthquakes
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1507-1521

    The study of the properties of seismic noise in Kamchatka is based on the idea that noise is an important source of information about the processes preceding strong earthquakes. The hypothesis considered is that an increase in seismic hazard is accompanied by a simplification of the statistical structure of seismic noise and by an increase in the spatial correlations of its properties. Three statistics characterizing the noise were used: the entropy of the distribution of squared wavelet coefficients, the support width of the multifractal singularity spectrum, and the Donoho–Johnstone index. The values of these parameters reflect the complexity of the signal: if a random signal is close in its properties to white noise, the entropy is at its maximum and the other two parameters are at their minimum. The statistics are calculated for 6 station clusters. For each station cluster, daily median noise properties are computed in successive 1-day time windows, yielding an 18-dimensional (3 properties × 6 station clusters) time series of properties. To extract the common features of changes in the noise parameters, the principal component method is applied to each station cluster, compressing the information into a 6-dimensional daily time series of principal components. Spatial coherences of the noise are estimated as the set of maximum pairwise quadratic coherence spectra between the principal components of the station clusters in a sliding time window 365 days long. By calculating histograms of the cluster numbers at which the minimum and maximum values of the noise statistics are attained in a sliding 365-day window, the migration of seismic hazard areas was assessed in comparison with strong earthquakes of magnitude at least 7.
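
    A minimal sketch (ours, not the authors' code; PyWavelets and the db4 wavelet are illustrative choices) of the first noise statistic named above, the normalized entropy of the distribution of squared wavelet coefficients; for a signal close to white noise it approaches its maximum value of 1:

```python
# Normalized entropy of squared wavelet coefficients (a sketch, not the
# authors' implementation). Values lie in [0, 1]; white noise gives ~1.
import numpy as np
import pywt  # PyWavelets

def wavelet_entropy(x, wavelet="db4"):
    coeffs = pywt.wavedec(np.asarray(x, dtype=float), wavelet)
    c = np.concatenate(coeffs) ** 2          # squared wavelet coefficients
    p = c / c.sum()                          # normalized energy distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(c.size))

rng = np.random.default_rng(0)
print(wavelet_entropy(rng.normal(size=1024)))             # near 1: white noise
print(wavelet_entropy(np.sin(np.linspace(0, 40, 1024))))  # much lower
```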

  2. Kondratyev M.A.
    Forecasting methods and models of disease spread
    Computer Research and Modeling, 2013, v. 5, no. 5, pp. 863-882

    The number of papers addressing the forecasting of infectious disease morbidity is growing rapidly owing to the accumulation of available statistical data. This article surveys the major approaches to short-term and long-term morbidity forecasting, pointing out their limitations and the possibilities for their practical application. The paper presents the conventional time series analysis methods — regression and autoregressive models; machine-learning-based approaches — Bayesian networks and artificial neural networks; case-based reasoning; and filtration-based techniques. The best-known mathematical models of infectious diseases are mentioned: classical equation-based models (deterministic and stochastic) and modern simulation models (network and agent-based).

    Views (last year): 71. Citations: 19 (RSCI).
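
    As a concrete illustration of the autoregressive models mentioned in the survey, here is a minimal sketch (ours, with hypothetical weekly case counts, not data from the paper) of a one-step-ahead AR(p) morbidity forecast fitted by ordinary least squares:

```python
# One-step-ahead AR(p) forecast by ordinary least squares (illustrative sketch).
import numpy as np

def ar_forecast(x, p=3):
    x = np.asarray(x, dtype=float)
    n = len(x)
    y = x[p:]                                          # targets x[t]
    X = np.column_stack([x[p - j : n - j] for j in range(1, p + 1)])
    X = np.column_stack([X, np.ones(n - p)])           # lags 1..p plus intercept
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    last = np.concatenate([x[: -p - 1 : -1], [1.0]])   # x[n-1], ..., x[n-p], 1
    return float(last @ coef)

weekly_cases = [120, 135, 150, 180, 210, 260, 310, 380, 430, 470]  # hypothetical
print(ar_forecast(weekly_cases))   # forecast of next week's morbidity count
```
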
  3. Priadein R.B., Stepantsov M.Y.
    On a possible approach to a sport game with discrete time simulation
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 271-279

    The paper proposes an approach to the simulation of a sport game consisting of a discrete set of separate competitions. In this approach, such a competition is treated as a random process — in general, a non-Markovian one. We first treat the flow of the game as a Markov process, obtaining recursive relationships between the probabilities of reaching particular score states in a tennis match, as well as secondary indicators of the game, such as the expectation and variance of the number of serves needed to finish the game. We then use a simulation system modeling the match that allows an arbitrary change of the probabilities of the outcomes of the competitions composing the match; for instance, the probabilities may depend on the results of previous competitions. This paper thus presents a modification of the model previously proposed by the authors for sports games with continuous time.

    The proposed approach makes it possible to evaluate not only the probability of the final outcome of the match but also the probabilities of reaching each of the possible intermediate results, as well as secondary indicators of the game, such as the number of separate competitions it takes to finish the match. The paper includes a detailed description of the construction of a simulation system for a game within a tennis match; a set and the whole tennis match are then simulated by analogy. We prove several statements concerning the fairness of the tennis serving rules, understood as the independence of the outcome of a competition from the right to serve first. We simulate a cancelled ATP series match, obtaining its most probable intermediate and final outcomes for three different possible variants of the course of the match.

    The main result of this paper is a method for simulating the match that is applicable not only to tennis but also to other types of sports games with discrete time.

    Views (last year): 9.
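
    A minimal Monte Carlo sketch in the spirit of this approach (ours, not the authors' simulation system): one tennis game as a Markov chain in which the server wins each point independently with probability p, yielding both the win probability and the mean number of serves:

```python
import random

def play_game(p, rng):
    """One tennis game: the server wins each point with probability p.
    Returns (server_won, number_of_serves)."""
    a = b = serves = 0
    while max(a, b) < 4 or abs(a - b) < 2:   # first to 4 points, win by 2
        serves += 1
        if rng.random() < p:
            a += 1
        else:
            b += 1
    return a > b, serves

def estimate(p, trials=100_000):
    rng = random.Random(1)
    wins = total = 0
    for _ in range(trials):
        won, serves = play_game(p, rng)
        wins += won
        total += serves
    return wins / trials, total / trials     # (win probability, mean serves)

print(estimate(0.6))
```
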
  4. Beloborodova E.I., Tamm M.V.
    On some properties of short-wave statistics of FOREX time series
    Computer Research and Modeling, 2017, v. 9, no. 4, pp. 657-669

    Financial mathematics is one of the most natural applications for the statistical analysis of time series. Financial time series reflect simultaneous activity of a large number of different economic agents. Consequently, one expects that methods of statistical physics and the theory of random processes can be applied to them.

    In this paper, we provide a statistical analysis of time series of the FOREX currency market. Of particular interest is the comparison of the behavior of the time series depending on how time is measured: physical time versus trading time measured in the number of elementary price changes (ticks). The experimentally observed statistics of the time series under consideration (euro–dollar for the first half of 2007 and for 2009, and British pound–dollar for 2007) differ radically depending on the choice of time measurement. When time is measured in ticks, the distribution of price increments is well described by the normal distribution already on a scale of the order of ten ticks. When price increments are instead measured over real physical time, their distribution continues to differ radically from the normal up to scales of the order of minutes and even hours.

    To explain this phenomenon, we investigate the statistical properties of the elementary increments in price and time. In particular, we show that the distribution of time between ticks for all three time series has long power-law tails (spanning 1–2 orders of magnitude) with an exponential cutoff at large times. We obtain approximate expressions for the distributions of waiting times in all three cases. The other statistical characteristics of the time series (the distribution of elementary price changes, the pair correlation functions for price increments and for waiting times) demonstrate fairly simple behavior. Thus, it is the anomalously wide distribution of waiting times that plays the most important role in the deviation of the distribution of increments from the normal. Finally, we discuss the possibility of applying a continuous time random walk (CTRW) model to describe the FOREX time series.

    Views (last year): 10.
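
    The effect described above can be reproduced with a toy CTRW-style model (a sketch with synthetic data, not the paper's analysis): unit price jumps separated by heavy-tailed waiting times give nearly Gaussian increments in tick time but strongly non-Gaussian increments in physical time:

```python
import numpy as np

def kurtosis(z):
    z = z - z.mean()
    return float((z ** 4).mean() / (z ** 2).mean() ** 2)   # 3 for a Gaussian

rng = np.random.default_rng(0)
n = 200_000
waits = rng.pareto(1.5, size=n) + 1.0                # heavy-tailed waiting times
times = np.cumsum(waits)
price = np.cumsum(rng.choice([-1.0, 1.0], size=n))   # elementary +-1 price changes

# Increments over a fixed number of ticks ("trading time"): nearly Gaussian.
print("tick time:    ", kurtosis(price[50:] - price[:-50]))

# Increments over fixed physical-time windows: strongly non-Gaussian.
grid = np.arange(0.0, times[-1], 50.0 * waits.mean())
idx = np.minimum(np.searchsorted(times, grid), n - 1)
print("physical time:", kurtosis(np.diff(price[idx])))
```
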
  5. Belotelov N.V., Apal’kova T.G., Mamkin V.V., Kurbatova Y.A., Olchev A.V.
    Some relationships between thermodynamic characteristics and water vapor and carbon dioxide fluxes in a recently clear-cut area
    Computer Research and Modeling, 2017, v. 9, no. 6, pp. 965-980

    The temporal variability of the exergy of short-wave and long-wave radiation and its relationships with the sensible heat, water vapor (H2O) and carbon dioxide (CO2) fluxes in a recently clear-cut area in a mixed coniferous and small-leaved forest in the Tver region are discussed. An analysis of the radiation and exergy efficiency coefficients suggested by Yu.M. Svirezhev showed that during the first eight months after clear-cutting the forest ecosystem functioned as a "heat engine", i.e. the processes of energy dissipation dominated over the processes of biomass production. To validate these findings, a statistical analysis of the temporal variability of meteorological parameters, as well as of the daily fluxes of sensible heat, H2O and CO2, was performed using trigonometric polynomials. Statistical models depending linearly on the exergy of short-wave and long-wave radiation were obtained for the mean daily values of CO2 fluxes, the gross primary production of the regenerating vegetation, and the sensible heat fluxes. The analysis of these dependences also confirmed the results obtained from the radiation and exergy efficiency coefficients. Splitting the time series into separate intervals, e.g. "spring–summer" and "summer–autumn", revealed that the statistically significant relationships between the atmospheric fluxes and exergy strengthened in the summer months as the clear-cut area was overgrown by grassy and young woody vegetation. The analysis of linear relationships between the time series of latent heat fluxes and exergy showed them to be statistically insignificant, whereas the linear relationships between latent heat fluxes and temperature were statistically significant. Air temperature was the key factor improving the accuracy of the models, whereas the effect of exergy was insignificant. The results indicate that during active vegetation regeneration within the clear-cut area the seasonal variability of surface evaporation is governed mainly by temperature variation.

    Views (last year): 15. Citations: 1 (RSCI).
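
    A minimal sketch (synthetic data and illustrative variable names, not the paper's models) of the type of statistical model described above: a trigonometric polynomial in day-of-year combined with a linear dependence on radiation exergy, fitted by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
day = np.arange(240)                                    # observation days
exergy = 50 + 20 * np.sin(2 * np.pi * day / 365) + rng.normal(0, 3, day.size)
flux = 0.05 * exergy + np.cos(2 * np.pi * day / 365) + rng.normal(0, 0.2, day.size)

# Design matrix: intercept, two annual harmonics, and a linear exergy term.
w = 2 * np.pi * day / 365.0
X = np.column_stack([np.ones_like(w),
                     np.sin(w), np.cos(w),
                     np.sin(2 * w), np.cos(2 * w),
                     exergy])
beta, *_ = np.linalg.lstsq(X, flux, rcond=None)
print("fitted coefficients:", np.round(beta, 3))
```
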
  6. Sokolov A.V., Mamkin V.V., Avilov V.K., Tarasov D.L., Kurbatova Y.A., Olchev A.V.
    Application of a balanced identification method for gap-filling in CO2 flux data in a sphagnum peat bog
    Computer Research and Modeling, 2019, v. 11, no. 1, pp. 153-171

    The method of balanced identification was used to describe the response of the Net Ecosystem Exchange of CO2 (NEE) to changes in environmental factors and to fill the gaps in continuous CO2 flux measurements in a sphagnum peat bog in the Tver region. The measurements were carried out in the peat bog by the eddy covariance method from August to November 2017. Owing to rainy weather and recurrent periods of low atmospheric turbulence, the proportion of gaps in the measured CO2 fluxes at our experimental site over the entire measurement period exceeded 40%. The model developed for gap-filling in the long-term experimental data considers NEE as the difference between Ecosystem Respiration (RE) and Gross Primary Production (GPP), i.e. the key processes of ecosystem functioning, and describes their dependence on incoming solar radiation (Q), soil temperature (T), water vapor pressure deficit (VPD) and ground water level (WL). The balanced identification method applied for this purpose is based on the search for the optimal ratio between model simplicity and data-fitting accuracy — the ratio providing the minimum of the modeling error estimated by cross-validation. The numerical solutions obtained are characterized by the minimum necessary nonlinearity (curvature), which provides sufficient interpolation and extrapolation properties of the developed models; this is particularly important for filling the missing values in the NEE measurements. Examining the temporal variability of NEE and the key environmental factors revealed a statistically significant dependence of GPP on Q, T and VPD, and of RE on T and WL. At the same time, the inaccuracy of the applied method in simulating the mean daily NEE was less than 10%, and the error in the NEE estimates obtained by the method was higher than that of the REddyProc model, which considers the influence of a smaller number of environmental parameters on NEE. Analyzing the gap-filled time series of NEE made it possible to derive the diurnal and inter-daily variability of NEE and to obtain cumulative CO2 fluxes in the peat bog for the selected summer–autumn period. It was shown that the rate of CO2 fixation by the peat bog vegetation in August was significantly higher than the rate of ecosystem respiration, whereas from September, owing to a strong decrease in GPP, the peat bog turned into a consistent source of CO2 for the atmosphere.

    Views (last year): 19.
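
    The core idea, choosing model complexity by the cross-validated error and then using the selected model to fill gaps, can be illustrated with a minimal sketch (ours, with synthetic data and a simple polynomial model family; the actual balanced identification software is more general):

```python
import numpy as np

def cv_error(x, y, degree, folds=5):
    """Mean cross-validation error of a polynomial fit of the given degree."""
    idx = np.arange(len(x)) % folds
    err = 0.0
    for k in range(folds):
        train, test = idx != k, idx == k
        coef = np.polyfit(x[train], y[train], degree)
        err += np.mean((np.polyval(coef, x[test]) - y[test]) ** 2)
    return err / folds

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
y = np.sin(t) + rng.normal(0, 0.2, t.size)
gaps = rng.random(t.size) < 0.4            # ~40% missing, as in the measurements
obs = ~gaps

best = min(range(1, 12), key=lambda d: cv_error(t[obs], y[obs], d))
coef = np.polyfit(t[obs], y[obs], best)
filled = np.where(gaps, np.polyval(coef, t), y)   # gap-filled series
print("selected model complexity (degree):", best)
```
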
  7. Koganov A.V., Rakcheeva T.A., Prikhodko D.I.
    Experimental identification of the organization of mental calculations of the person on the basis of algebras of different associativity
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 311-327

    This work continues research on the ability of a person to increase the productivity of information processing by working in parallel or by improving the performance of the analyzers. A person receives a series of tasks whose solution requires the processing of a certain amount of information; the time and correctness of each solution are recorded. The dependence of the average solution time on the amount of information in the task is determined from the correctly solved tasks. According to the proposed method, the tasks involve evaluating expressions in two algebras, one associative and the other non-associative. To facilitate the subjects' work, figurative graphic images of the algebra elements were used in the experiment. Non-associative calculations were implemented in the form of the game "rock-paper-scissors": it was necessary to determine the winning symbol in a long line of these figures, assuming that they appear sequentially from left to right and each plays against the previous winning symbol. Associative calculations were based on the recognition of pictures from a finite set of simple images: it was necessary to determine which picture from this set was missing from the line, or to state that all the pictures were present; in each task at most one picture was missing. Computation in an associative algebra allows parallel counting, while in the absence of associativity only sequential computation is possible. Therefore, the analysis of the solution times of a series of tasks distinguishes between uniform sequential, accelerated sequential, and parallel computing strategies. The experiments showed that all subjects used a uniform sequential strategy to solve the non-associative tasks. For the associative task, all subjects used parallel computation, and some accelerated the parallel computation as the complexity of the task grew. Judging by the evolution of the solution time, a small proportion of the subjects supplemented the parallel computation at high complexity with a sequential stage (possibly to check the solution). We developed a special method for assessing the rate at which a person processes input information; it allowed us to estimate the level of parallelism of the computation in the associative task, and a parallelism level of two to three was registered. The characteristic speed of information processing in the sequential case (about one and a half characters per second) is half the typical speed of human image recognition; apparently, the difference reflects the processing time actually spent on the computation itself. For the associative task with the minimum amount of information, the solution time is close to that of the non-associative case, or shorter by less than a factor of two. This is probably because, for a small number of characters, recognition almost exhausts the calculations in the non-associative task used.

    Views (last year): 16.
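
    The distinction at the heart of the experiment is easy to state in code. A minimal sketch (ours, not the experimental software): "rock-paper-scissors" is a non-associative operation, so the winner of a long line can only be computed by a strictly left-to-right fold, whereas an associative task such as finding the missing picture reduces by set union and can be split into parts processed in parallel:

```python
from functools import reduce

BEATS = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}

def rps(a, b):
    """Winner of one rock-paper-scissors bout (non-associative operation)."""
    return a if (a, b) in BEATS or a == b else b

line = ["rock", "paper", "paper", "scissors", "rock", "rock", "paper"]
print(reduce(rps, line))                      # left fold is the only safe order

# Non-associativity: regrouping changes the result.
print(rps(rps("rock", "paper"), "scissors"))  # -> scissors
print(rps("rock", rps("paper", "scissors")))  # -> rock

# The associative task: set union lets halves be processed independently.
alphabet = {"sun", "tree", "house", "star", "fish"}
left = {"sun", "tree", "star"}                # pictures seen in one half
right = {"fish", "sun"}                       # pictures seen in the other half
print(alphabet - (left | right))              # -> {'house'}: the missing picture
```
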
  8. Lyubushin A.A., Rodionov E.A.
    Analysis of predictive properties of ground tremor using Huang decomposition
    Computer Research and Modeling, 2024, v. 16, no. 4, pp. 939-958

    A method is proposed for analyzing the tremor of the earth's surface, measured by means of space geodesy, in order to extract effects prognostic of seismic activation. The method is illustrated by a joint analysis of a set of synchronous time series of daily vertical displacements of the earth's surface on the Japanese Islands for the time interval 2009–2023. The analysis is based on dividing the source data (1047 time series) into blocks (clusters of stations) and sequentially applying the principal component method. The station network is divided into clusters using the K-means method, with the number of clusters chosen by the maximum pseudo-F statistic criterion; for Japan the optimal number of clusters was 15. The Huang decomposition into a sequence of independent empirical oscillation modes (EMD — Empirical Mode Decomposition) is applied to the time series of principal components from the station blocks. To ensure the stability of the estimates of the EMD waveforms, averaging over 1000 independent additive realizations of white noise of limited amplitude was performed. Using the Cholesky decomposition of the covariance matrix of the waveforms of the first three EMD components in a sliding time window, indicators of anomalous tremor behavior were defined. By calculating the correlation function between the averaged indicators of anomalous behavior and the released seismic energy in the vicinity of the Japanese Islands, it was established that bursts in the measure of anomalous tremor behavior precede releases of seismic energy. The purpose of the article is to examine the common hypothesis that movements of the earth's crust recorded by space geodesy may contain predictive information. That displacements recorded by geodetic methods respond to the effects of earthquakes is widely known and has been demonstrated many times, but isolating geodetic effects that predict seismic events is much more challenging. In this paper, we propose one method for detecting predictive effects in space geodesy data.
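
    A minimal sketch (ours, with synthetic coordinates, not the authors' code) of the station-clustering step described above: K-means with the number of clusters chosen by the maximum pseudo-F statistic, available in scikit-learn as the Calinski-Harabasz score:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import calinski_harabasz_score

# Synthetic "station coordinates" with six compact groups.
coords, _ = make_blobs(n_samples=300, centers=6, cluster_std=0.8, random_state=0)

scores = {}
for k in range(2, 16):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(coords)
    scores[k] = calinski_harabasz_score(coords, labels)   # pseudo-F statistic

best_k = max(scores, key=scores.get)
print("optimal number of clusters:", best_k)   # 6 for this synthetic set
```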

  9. Methi G., Kumar A.
    Numerical Solution of Linear and Higher-order Delay Differential Equations using the Coded Differential Transform Method
    Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1091-1099

    The aim of the paper is to obtain numerical solutions of linear and higher-order delay differential equations (DDEs) using the coded differential transform method (CDTM). The CDTM is developed and applied to delay problems to show the efficiency of the proposed method. The coded differential transform method is a combination of the differential transform method and Mathematica software. We construct recursive relations for several delay problems, which result in systems of simultaneous equations, and solve them to obtain the terms of the series solution using the coded differential transform method. The numerical solution obtained by the CDTM is compared with the exact solution. Numerical results and an error analysis are presented for delay differential equations to show that the proposed method is suitable for solving them. It is established that the delay differential equations under discussion are solvable in a specific domain, and that the error between the CDTM solution and the exact solution becomes very small as more terms are included in the series solution. The coded differential transform method reduces complex calculations, avoids discretization and linearization, and saves calculation time. In addition, it is easy to implement and robust. The error analysis shows that the CDTM is consistent and converges fast, and we obtain more accurate results with it than with other methods.
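
    A minimal worked sketch of the differential transform recursion for a delay problem (our example, not one from the paper): for the proportional-delay equation y'(t) = -[y(t/2)]^2 with y(0) = 1, whose exact solution is y(t) = e^{-t}, the transform of y(qt) is q^k Y(k), a product becomes a convolution, and y'(t) gives (k+1)Y(k+1):

```python
import math

N, q = 15, 0.5
Y = [1.0] + [0.0] * N                  # Y[0] = y(0) = 1
for k in range(N):
    # (k+1) Y[k+1] = -q^k * sum_m Y[m] Y[k-m]  (transform of -[y(qt)]^2)
    conv = sum(Y[m] * Y[k - m] for m in range(k + 1))
    Y[k + 1] = -(q ** k) * conv / (k + 1)

t = 0.8
series = sum(Y[k] * t ** k for k in range(N + 1))
print(series, math.exp(-t))            # the two values agree closely
```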

  10. This article addresses the problem of developing a technology for collecting the initial data needed to build models for assessing the functional state of a person. The state is assessed from a person's pupillary response to a change in illumination using the pupillometry method. This method involves the collection and analysis of initial data (pupillograms) in the form of time series characterizing the dynamics of the human pupil's response to a light impulse. The drawbacks of the traditional approach to collecting the initial data using computer vision methods and time-series smoothing are analyzed, and attention is drawn to the importance of the quality of the initial data for building adequate mathematical models. The need for manual marking of the iris and pupil circles to improve the accuracy and quality of the initial data is substantiated. The stages of the proposed data-collection technology are described, and an example of an obtained pupillogram is given, which has a smooth shape and contains no outliers, noise, anomalies or missing values. On the basis of the presented technology, a software and hardware complex has been developed, consisting of special software with two main modules and hardware implemented on a Raspberry Pi 4 Model B microcomputer with the peripheral equipment needed for the specified functionality. To evaluate the effectiveness of the developed technology, models of a single-layer perceptron and an ensemble of neural networks were used, built on initial data on the functional state of intoxication. The studies showed that manual marking of the initial data (in comparison with automatic computer vision methods) reduces the number of errors of the first and second kind and, accordingly, increases the accuracy of assessing the functional state of a person. Thus, the presented technology for collecting initial data can be used effectively to build adequate models for assessing the functional state of a person from the pupillary response to changes in illumination. Such models are relevant for particular problems of ensuring transport security, in particular, monitoring the functional state of drivers.
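
    A minimal illustrative sketch (synthetic data and hypothetical thresholds; not the complex's software) of the kind of quality check implied above: verifying that a pupillogram has no missing values, implausible radii, or abrupt jumps before it is used for model building:

```python
import numpy as np

def pupillogram_is_clean(radius_mm, max_jump_mm=0.5):
    """True if the series has no gaps, implausible values, or abrupt jumps."""
    r = np.asarray(radius_mm, dtype=float)
    no_gaps = not np.isnan(r).any()
    plausible = bool(np.all((r > 0.5) & (r < 5.0)))   # hypothetical radius bounds
    smooth = bool(np.all(np.abs(np.diff(r)) < max_jump_mm))
    return no_gaps and plausible and smooth

t = np.linspace(0, 3, 90)                    # 3 s of video at 30 fps
r = 3.0 - 1.2 * (1 - np.exp(-t / 0.4))       # smooth constriction to a light pulse
print(pupillogram_is_clean(r))               # True for a clean curve
```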
