Search results for 'time series':
Articles found: 40
  1. We build new tests that make it possible to increase the human capacity for information processing through the parallel execution of several logic operations of a prescribed type. To check the causes of this capacity increase, we develop control tests on the same class of logic operations, for which a parallel organization of the computation is ineffective. We use the apparatus of universal algebra and automata theory. This article extends a cycle of works investigating the human capacity for parallel computation; the main publications on this topic are given in the references. The tasks in the described tests can be defined as computing the result of a sequence of same-type operations from some algebra. If the operation is associative, parallel computation is made effective by a suitable grouping of the process. In computational terms this corresponds to the simultaneous work of several processors, each of which transforms per unit time a certain known number of elements of the input data or intermediate results (the processor productivity). It is not known which data elements the brain uses for logical or mathematical computation, or how many elements it processes per unit time. Therefore the test contains a sequence of task presentations with different numbers of logic operations over a fixed alphabet; this number serves as the measure of task complexity. Analysis of the dependence of the solution time on complexity makes it possible to estimate the processor productivity and the form of organization of the computation. For a sequential computation only one processor works, and the solution time is a linear function of complexity. If new processors start to work in parallel as task complexity increases, the dependence of the solution time on complexity is represented by a curve that is convex from below. To detect the situation in which a person increases the speed of a single processor as complexity grows, we use a series of tasks with similar operations but in a non-associative algebra. In such tasks parallel computation gives little gain in efficiency as the number of processors increases; this is the control set of tests. In the article we also consider one more class of tests, based on computing the trajectory of the state of a formal automaton for a given input sequence. We investigate a special class of automata (relays) for which the construction affects the effectiveness of the parallel computation of the final automaton state. For all tests we estimate the effectiveness of parallel computation. The article does not contain experimental results.
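
    As an illustration of the analysis described above, a linear fit of solution time against complexity can be compared with a curved (quadratic) fit; the measurements, threshold, and decision rule below are illustrative assumptions, not data or code from the article.

      import numpy as np

      # Hypothetical measurements: task complexity (number of operations in the
      # chain) and the time a subject needed to solve the task.
      complexity = np.array([2, 4, 6, 8, 10, 12, 14, 16])
      time_s = np.array([1.1, 2.0, 3.2, 4.7, 6.5, 8.6, 11.0, 13.8])

      lin = np.polyfit(complexity, time_s, 1)    # sequential model: t ~ a*n + b
      quad = np.polyfit(complexity, time_s, 2)   # curved dependence

      res_lin = np.sum((np.polyval(lin, complexity) - time_s) ** 2)
      res_quad = np.sum((np.polyval(quad, complexity) - time_s) ** 2)

      if res_quad < 0.5 * res_lin:
          print("noticeably curved growth: the organization of processing changes with complexity")
      else:
          print("near-linear growth: consistent with a single sequential processor")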

    Views (last year): 14. Citations: 1 (RSCI).
  2. Lyubushin A.A., Farkov Y.A.
    Synchronous components of financial time series
    Computer Research and Modeling, 2017, v. 9, no. 4, pp. 639-655

    The article proposes a method of joint analysis of multidimensional financial time series based on the evaluation of a set of properties of stock quotes in a sliding time window and the subsequent averaging of the property values over all analyzed companies. The main purpose of the analysis is to construct measures of joint behavior of time series reacting to the occurrence of a synchronous or coherent component. The coherence of the behavior of the characteristics of a complex system is an important feature that makes it possible to evaluate the approach of the system to sharp changes in its state. The basis for the search for precursors of sharp changes is the general idea of increasing correlation of random fluctuations of the system parameters as it approaches the critical state. The increments in time series of stock values have a pronounced chaotic character and a large amplitude of individual noise, against which a weak common signal can be detected only on the basis of its correlation in different scalar components of a multidimensional time series. It is known that classical methods of analysis based on the use of correlations between neighboring samples are ineffective in the processing of financial time series, since from the point of view of the correlation theory of random processes, increments in the value of shares formally have all the attributes of white noise (in particular, a “flat spectrum” and a “delta-shaped” autocorrelation function). For this reason, it is proposed to go from analyzing the initial signals to examining the sequences of their nonlinear properties calculated in time fragments of small length. As such properties, the entropy of the wavelet coefficients in the decomposition over the Daubechies basis, the multifractal parameters, and the autoregressive measure of signal nonstationarity are used. Measures of synchronous behavior of the time series properties in a sliding time window are constructed using the principal component method, the moduli of all pairwise correlation coefficients, and a multiple spectral coherence measure that is a generalization of the quadratic coherence spectrum between two signals. The shares of 16 large Russian companies from the beginning of 2010 to the end of 2016 were studied. Using the proposed method, two synchronization time intervals of the Russian stock market were identified: from mid-December 2013 to mid-March 2014 and from mid-October 2014 to mid-January 2016.
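
    For illustration, two of the synchronization measures mentioned (the mean modulus of pairwise correlation coefficients and the share of variance captured by the first principal component, both in a sliding window) can be sketched roughly as follows; the function name, window handling, and input layout are assumptions, not the authors' code.

      import numpy as np

      def sliding_synchronization(x, window):
          """x: (n_samples, n_series) array of per-company property values.
          Returns the mean |pairwise correlation| and the share of variance
          captured by the first principal component in each sliding window."""
          n, m = x.shape
          pair_idx = np.triu_indices(m, k=1)
          mean_corr, pc1_share = [], []
          for t in range(window, n + 1):
              c = np.corrcoef(x[t - window:t], rowvar=False)
              mean_corr.append(np.abs(c[pair_idx]).mean())
              eig = np.linalg.eigvalsh(c)          # eigenvalues in ascending order
              pc1_share.append(eig[-1] / eig.sum())
          return np.array(mean_corr), np.array(pc1_share)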

    Views (last year): 12. Citations: 2 (RSCI).
  3. Kutovskiy N.A., Nechaevskiy A.V., Ososkov G.A., Pryahina D.I., Trofimov V.V.
    Simulation of interprocessor interactions for MPI-applications in the cloud infrastructure
    Computer Research and Modeling, 2017, v. 9, no. 6, pp. 955-963

    A new cloud center of parallel computing is to be created in the Laboratory of Information Technologies (LIT) of the Joint Institute for Nuclear Research (JINR), which is expected to improve significantly the efficiency of numerical calculations and to expedite the receipt of new physically meaningful results due to a more rational use of computing resources. To optimize a scheme of parallel computations in a cloud environment it is necessary to test this scheme for various combinations of equipment parameters (processor speed and number of processors, throughput of the communication network, etc.). As a test problem, the parallel MPI algorithm for calculations of long Josephson junctions (LDJ) was chosen. The problem of evaluating the impact of the above-mentioned hardware factors on the computing speed of the test problem is solved by simulation with the SyMSim program developed in LIT.

    The simulation of the LDJ calculations in the cloud environment enables users, without running a series of tests in a real computing environment, to find the optimal number of CPUs for a given network type. This can save a significant amount of computing time and resources. The main parameters of the model were obtained from the results of a computational experiment conducted on a special cloud-based testbed. The computational experiments showed that the pure computation time decreases in inverse proportion to the number of processors, but depends significantly on the network bandwidth. Comparison of the empirical results with the simulation results showed that the simulation model correctly reproduces the parallel calculations performed using the MPI technology. It also confirms our recommendation: to speed up calculations of this type, one should increase both the number of CPUs and the network throughput at the same time. The simulation results also make it possible to derive an empirical analytical formula expressing the dependence of the calculation time on the number of processors for a fixed system configuration. The obtained formula can be applied to other similar studies, but requires additional tests to determine the values of its parameters.
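
    The abstract does not reproduce the empirical formula itself; one common hedged form for such dependencies, a perfectly parallel part plus a serial overhead plus a communication cost growing with the number of processes, can be fitted as sketched below with purely illustrative measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical measurements: number of CPUs and total run time in seconds.
      p = np.array([1, 2, 4, 8, 16, 32])
      t = np.array([512.0, 263.0, 140.0, 79.0, 51.0, 43.0])

      def run_time(p, a, b, c):
          # a / p : perfectly parallel computation part
          # b     : fixed serial overhead
          # c * p : communication cost growing with the number of processes
          return a / p + b + c * p

      params, _ = curve_fit(run_time, p, t)
      print(dict(zip("abc", params)))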

    Views (last year): 10. Citations: 1 (RSCI).
  4. Zenyuk D.A.
    Stochastic simulation of chemical reactions in subdiffusion medium
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 87-104

    The theory of anomalous diffusion, which describes a vast number of transport processes with a power-law mean squared displacement, has been actively advancing in recent years. Diffusion of liquids in porous media, carrier transport in amorphous semiconductors, and molecular transport in viscous environments are widely known examples of anomalous deceleration of transport processes compared to the standard model.

    Direct Monte Carlo simulation is a convenient tool for studying such processes. An efficient stochastic simulation algorithm is developed in the present paper. It is based on a simple renewal process with interarrival times that have power-law asymptotics. Analytical derivations show a deep connection between this class of random processes and equations with fractional derivatives. The algorithm is further generalized by coupling it with chemical reaction simulation. This makes the stochastic approach especially useful, because the exact form of the integro-differential evolution equations for reaction–subdiffusion systems is still a matter of debate.

    The proposed algorithm relies on non-Markovian random processes, so qualitatively new effects must be accounted for carefully. The main question is how molecules leave the system during chemical reactions. An exact scheme that tracks all possible molecule combinations for every reaction channel is computationally infeasible because of the huge number of such combinations, which necessitates the use of simple heuristic procedures. The choice of heuristic greatly affects the obtained results, as illustrated by a series of numerical experiments.
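
    A minimal sketch of the renewal-process core of such a simulation, assuming waiting times drawn from a Pareto (Lomax) law with a power-law tail and unit jumps; the coupling to reaction channels and the heuristics discussed in the paper are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)

      def pareto_waiting_times(alpha, size):
          # Waiting times with P(T > t) ~ t**(-alpha), 0 < alpha < 1,
          # obtained by inverse-transform sampling of a Lomax law.
          u = rng.random(size)
          return u ** (-1.0 / alpha) - 1.0

      def ctrw_trajectory(alpha, n_jumps):
          # Continuous-time random walk: power-law waits, +/-1 jumps.
          times = np.cumsum(pareto_waiting_times(alpha, n_jumps))
          steps = rng.choice([-1.0, 1.0], size=n_jumps)
          return times, np.cumsum(steps)

      t, x = ctrw_trajectory(alpha=0.7, n_jumps=10_000)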

  5. Minnikhanov R.N., Anikin I.V., Dagaeva M.V., Faizrakhmanov E.M., Bolshakov T.E.
    Modeling of the effective environment in the Republic of Tatarstan using transport data
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 395-404

    Automated urban traffic monitoring systems are widely used to solve various tasks in the intelligent transport systems of different regions. They include video enforcement, video surveillance, traffic management systems, etc. Effective traffic management and rapid response to traffic incidents require continuous monitoring and analysis of information from these complexes, as well as time series forecasting for subsequent anomaly detection in the traffic flow. To increase the forecasting quality, data fusion from different sources is needed; it reduces the forecasting error related to possibly incorrect values and data gaps. We implemented an approach for short-term and middle-term forecasting of traffic flow (5, 10, 15 min) based on data fusion from video enforcement and video surveillance systems. We performed forecasting using different recurrent neural network architectures: LSTM, GRU, and bidirectional LSTM with one and two layers. We investigated the forecasting quality of bidirectional LSTM with 64 and 128 neurons in the hidden layers and studied input window sizes of 1, 4, 12, 24, and 48. The RMSE value was used as the forecasting error. We obtained a minimum RMSE of 0.032405 for the basic LSTM with 64 neurons in the hidden layer and a window size of 24.
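
    A minimal sketch of the basic LSTM configuration reported as best (64 units, input window of 24, next-step regression with MSE loss), written in Keras with placeholder data; the data fusion, normalization, and training schedule of the actual study are not reproduced.

      import numpy as np
      import tensorflow as tf

      def make_windows(series, window):
          # Turn a 1-D traffic-count series into (window, 1) input blocks
          # and next-step targets.
          x = np.stack([series[i:i + window] for i in range(len(series) - window)])
          y = series[window:]
          return x[..., None], y

      window = 24                        # one of the window sizes studied
      counts = np.random.rand(5000)      # placeholder for fused, normalized counts
      x, y = make_windows(counts, window)

      model = tf.keras.Sequential([
          tf.keras.Input(shape=(window, 1)),
          tf.keras.layers.LSTM(64),
          tf.keras.layers.Dense(1),
      ])
      model.compile(optimizer="adam", loss="mse")
      model.fit(x, y, epochs=5, batch_size=64, validation_split=0.2)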

  6. Koganov A.V., Rakcheeva T.A., Prikhodko D.I.
    Comparative analysis of human adaptation to the growth of visual information in the tasks of recognizing formal symbols and meaningful images
    Computer Research and Modeling, 2021, v. 13, no. 3, pp. 571-586

    We describe an engineering-psychological experiment that continues the study of how a person adapts to the increasing complexity of logical problems, by presenting a series of problems of increasing complexity determined by the volume of initial data. The tasks require calculations in an associative or non-associative system of operations. From the way the solution time changes with the number of necessary operations, we can conclude whether the person solves the problems in a purely sequential way or connects additional brain resources to work on the solution in parallel. In a previously published experimental work, the person solving the associative problem recognized color images of meaningful objects. In the new study, a similar problem is solved for abstract monochrome geometric shapes. Analysis of the results showed that in the second case the probability of the subject switching to a parallel method of processing visual information is significantly reduced. The research method is based on presenting a person with two types of tasks. One type of task involves associative calculations and admits a parallel solution algorithm. The other type is the control one and contains tasks in which calculations are not associative and parallel algorithms are ineffective. The task of recognizing and searching for a given object is associative, and a parallel strategy significantly speeds up the solution with relatively small additional resources. As the control series of tasks (to separate parallel work from the acceleration of a sequential algorithm) we use, as in the previous experiment, a non-associative comparison problem in cyclic arithmetic, presented in the visual form of the game “rock, paper, scissors”. In this problem a parallel algorithm requires a large number of processors with a small efficiency coefficient; therefore the transition of a person to a parallel algorithm for solving this problem is practically impossible, and the processing of the input information can be accelerated only by increasing the speed. Comparing the dependence of the solution time on the volume of source data for the two types of tasks allows us to identify four types of strategies for adapting to the increasing complexity of the task: uniform sequential, accelerated sequential, parallel computing (where possible), or an undefined (for this method) strategy. The reduction in the number of subjects who switch to a parallel strategy when the input information is encoded with formal images shows the effectiveness of codes that evoke associations in the subject: they increase the speed of human perception and processing of information. The article contains a preliminary mathematical model that explains this phenomenon; it is based on the appearance of a second set of initial data, which arises in a person as a result of recognizing the depicted objects.
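
    For illustration, the cyclic comparison behind the “rock, paper, scissors” control task can be written as an operation on residues modulo 3 whose non-associativity is easy to verify; this toy encoding is an assumption made for the sketch, not the authors' stimulus code.

      # 0 = rock, 1 = paper, 2 = scissors; paper beats rock,
      # scissors beat paper, rock beats scissors.
      def winner(a, b):
          return a if (a - b) % 3 == 1 else b

      def left_fold(seq):
          # Purely sequential left-to-right evaluation of the comparison chain.
          result = seq[0]
          for s in seq[1:]:
              result = winner(result, s)
          return result

      # Non-associativity: regrouping changes the outcome, so the parallel
      # regrouping that helps in associative tasks gives no benefit here.
      assert winner(winner(0, 1), 2) != winner(0, winner(1, 2))
      print(left_fold([0, 1, 2, 2, 0]))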

  7. Lyubushin A.A., Kopylova G.N., Kasimova V.A., Taranova L.N.
    Multifractal and entropy statistics of seismic noise in Kamchatka in connection with the strongest earthquakes
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1507-1521

    The study of the properties of seismic noise in Kamchatka is based on the idea that the noise is an important source of information about the processes preceding strong earthquakes. The hypothesis is considered that an increase in seismic hazard is accompanied by a simplification of the statistical structure of the seismic noise and an increase in the spatial correlations of its properties. The entropy of the distribution of squared wavelet coefficients, the width of the support of the multifractal singularity spectrum, and the Donoho–Johnstone index were used as statistics characterizing the noise. The values of these parameters reflect the complexity: if a random signal is close in its properties to white noise, then the entropy is maximal and the other two parameters are minimal. The statistics are calculated for 6 station clusters. For each station cluster, daily median noise properties are calculated in successive 1-day time windows, resulting in an 18-dimensional (3 properties and 6 station clusters) time series of properties. To highlight the general features of the changes in the noise parameters, the principal component method is applied to each cluster of stations, compressing the information into a 6-dimensional daily time series of principal components. Spatial noise coherences are estimated as a set of maximum pairwise quadratic coherence spectra between the principal components of the station clusters in a sliding time window of 365 days. By calculating histograms of the distribution of the cluster numbers in which the minimum and maximum values of the noise statistics are attained in a sliding time window 365 days long, the migration of seismic hazard areas was assessed in comparison with strong earthquakes with a magnitude of at least 7.
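
    As an illustration of one of the listed statistics, the normalized entropy of the distribution of squared wavelet coefficients can be computed roughly as follows (PyWavelets, a Daubechies wavelet); the wavelet choice, decomposition level, and windowing are assumptions rather than the exact settings of the study.

      import numpy as np
      import pywt

      def wavelet_entropy(signal, wavelet="db6", level=5):
          # Normalized entropy of the distribution of squared wavelet coefficients:
          # close to 1 for white-noise-like signals, smaller for more structured ones.
          coeffs = np.concatenate(pywt.wavedec(signal, wavelet, level=level))
          p = coeffs ** 2
          p = p / p.sum()
          p = p[p > 0]
          return float(-(p * np.log(p)).sum() / np.log(p.size))

      print(wavelet_entropy(np.random.default_rng(0).standard_normal(4096)))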

  8. Kondratyev M.A.
    Forecasting methods and models of disease spread
    Computer Research and Modeling, 2013, v. 5, no. 5, pp. 863-882

    The number of papers addressing the forecasting of infectious disease morbidity is growing rapidly due to the accumulation of available statistical data. This article surveys the major approaches to short-term and long-term morbidity forecasting; their limitations and practical application possibilities are pointed out. The paper presents the conventional time series analysis methods (regression and autoregressive models), machine-learning-based approaches (Bayesian networks and artificial neural networks), case-based reasoning, and filtration-based techniques. The best-known mathematical models of infectious diseases are mentioned: classical equation-based models (deterministic and stochastic) and modern simulation models (network and agent-based).
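
    As a toy example of the classical equation-based models mentioned in the survey, a deterministic SIR system can be integrated in a few lines; the parameter values and step size below are illustrative only.

      import numpy as np

      def sir(beta, gamma, s0, i0, days, dt=0.1):
          # Minimal deterministic SIR model (Euler integration, unit population).
          steps = int(days / dt)
          s, i, r = s0, i0, 1.0 - s0 - i0
          out = np.empty((steps, 3))
          for k in range(steps):
              ds = -beta * s * i
              di = beta * s * i - gamma * i
              s, i, r = s + ds * dt, i + di * dt, r - (ds + di) * dt
              out[k] = s, i, r
          return out

      trajectory = sir(beta=0.4, gamma=0.1, s0=0.99, i0=0.01, days=160)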

    Views (last year): 71. Citations: 19 (RSCI).
  9. Priadein R.B., Stepantsov M.Y.
    On a possible approach to a sport game with discrete time simulation
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 271-279

    The paper proposes an approach to simulating a sport game consisting of a discrete set of separate competitions. Under this approach, such a competition is considered as a random process, in general a non-Markovian one. At first we treat the flow of the game as a Markov process, obtaining recursive relationships between the probabilities of reaching particular score states in a tennis match, as well as secondary indicators of the game, such as the expectation and variance of the number of serves needed to finish the game. Then we use a simulation system modeling the match, which allows arbitrary changes of the outcome probabilities of the competitions that compose the match; for instance, we allow the probabilities to depend on the results of previous competitions. Therefore, this paper deals with a modification of the model previously proposed by the authors for sports games with continuous time.

    The proposed approach makes it possible to evaluate not only the probability of the final outcome of the match, but also the probabilities of reaching each of the possible intermediate results, as well as secondary indicators of the game, such as the number of separate competitions needed to finish the match. The paper includes a detailed description of the construction of a simulation system for a game of a tennis match; a set and the whole tennis match are then simulated by analogy. We establish several statements concerning the fairness of tennis serving rules, understood as the independence of the outcome of a competition from the right to serve first. We perform a simulation of a cancelled ATP series match, obtaining its most probable intermediate and final outcomes for three different possible variants of the course of the match.

    The main result of this paper is the developed match simulation method, applicable not only to tennis but also to other types of sports games with discrete time.
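
    For the Markov case with a constant probability of winning a point, the recursive relationship between score states of a single tennis game can be sketched as follows; the probability value and function name are illustrative, and the match-level simulation of the paper is not reproduced.

      from functools import lru_cache

      @lru_cache(maxsize=None)
      def p_hold(a, b, p=0.6):
          # Probability that the server wins the game from the point score (a, b),
          # assuming a constant probability p of winning each point.
          if a >= 4 and a - b >= 2:
              return 1.0
          if b >= 4 and b - a >= 2:
              return 0.0
          if a == b >= 3:
              # deuce: the game reduces to winning two points in a row first
              return p * p / (p * p + (1 - p) * (1 - p))
          return p * p_hold(a + 1, b, p) + (1 - p) * p_hold(a, b + 1, p)

      print(p_hold(0, 0))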

    Views (last year): 9.
  10. Beloborodova E.I., Tamm M.V.
    On some properties of short-wave statistics of FOREX time series
    Computer Research and Modeling, 2017, v. 9, no. 4, pp. 657-669

    Financial mathematics is one of the most natural applications for the statistical analysis of time series. Financial time series reflect simultaneous activity of a large number of different economic agents. Consequently, one expects that methods of statistical physics and the theory of random processes can be applied to them.

    In this paper, we provide a statistical analysis of time series of the FOREX currency market. Of particular interest is the comparison of the time series behavior depending on the way time is measured: physical time versus trading time measured in the number of elementary price changes (ticks). The experimentally observed statistics of the time series under consideration (euro–dollar for the first half of 2007 and for 2009, and British pound–dollar for 2007) differ radically depending on the choice of the method of time measurement. When measuring time in ticks, the distribution of price increments can be well described by the normal distribution already on a scale of the order of ten ticks. At the same time, when price increments are measured in real physical time, the distribution of increments continues to differ radically from the normal up to scales of the order of minutes and even hours.

    To explain this phenomenon, we investigate the statistical properties of the elementary increments in price and time. In particular, we show that the distributions of the time between ticks for all three time series have long (1–2 orders of magnitude) power-law tails with an exponential cutoff at large times. We obtained approximate expressions for the distributions of waiting times in all three cases. Other statistical characteristics of the time series (the distribution of elementary price changes, pair correlation functions for price increments and for waiting times) demonstrate fairly simple behavior. Thus, it is the anomalously wide distribution of the waiting times that plays the most important role in the deviation of the distribution of increments from the normal. Finally, we discuss the possibility of applying a continuous-time random walk (CTRW) model to describe the FOREX time series.
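
    A rough way to estimate the power-law tail exponent of the inter-tick waiting times described above is a log-log fit of the empirical survival function; the quantile threshold and the synthetic check below are assumptions made for the sketch.

      import numpy as np

      def tail_exponent(waits, q=0.9):
          # Least-squares fit of log-survival against log-time above the
          # q-th quantile; returns a rough power-law tail exponent.
          waits = np.sort(np.asarray(waits, dtype=float))
          tail = waits[waits >= np.quantile(waits, q)]
          ranks = np.arange(waits.size - tail.size, waits.size)
          survival = 1.0 - ranks / waits.size
          slope, _ = np.polyfit(np.log(tail), np.log(survival), 1)
          return -slope

      # Synthetic check: Lomax-type waits with a known tail exponent of 1.4.
      rng = np.random.default_rng(1)
      synthetic = rng.random(100_000) ** (-1.0 / 1.4) - 1.0
      print(tail_exponent(synthetic))   # should be roughly 1.4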

    Views (last year): 10.
