Search results for 'time series':
Articles found: 41
  1. Belotelov N.V., Apal’kova T.G., Mamkin V.V., Kurbatova Y.A., Olchev A.V.
    Some relationships between thermodynamic characteristics and water vapor and carbon dioxide fluxes in a recently clear-cut area
    Computer Research and Modeling, 2017, v. 9, no. 6, pp. 965-980

    The temporal variability of the exergy of short-wave and long-wave radiation and its relationship with sensible heat, water vapor (H2O) and carbon dioxide (CO2) fluxes in a recently clear-cut area in a mixed coniferous and small-leaved forest in the Tver region is discussed. An analysis based on the radiation and exergy efficiency coefficients suggested by Yu.M. Svirezhev showed that during the first eight months after clear-cutting the forest ecosystem functioned as a "heat engine", i.e. the processes of energy dissipation dominated over the processes of biomass production. To validate these findings, a statistical analysis of the temporal variability of meteorological parameters, as well as daily fluxes of sensible heat, H2O and CO2, was carried out using trigonometric polynomials. Statistical models that depend linearly on the exergy of short-wave and long-wave radiation were obtained for the mean daily values of CO2 fluxes, the gross primary production of the regenerating vegetation, and sensible heat fluxes. The analysis of these dependences also confirmed the results obtained from the radiation and exergy efficiency coefficients. Splitting the time series into separate intervals, e.g. “spring–summer” and “summer–autumn”, revealed that the statistically significant relationships between atmospheric fluxes and exergy strengthened in the summer months as the clear-cut area was overgrown by grassy and young woody vegetation. The analysis of linear relationships between the time series of latent heat fluxes and exergy showed that they are statistically insignificant, whereas the linear relationships between latent heat fluxes and temperature were statistically significant. Air temperature was the key factor improving the accuracy of the models, while the effect of exergy was insignificant. The results indicate that during active vegetation regeneration within the clear-cut area the seasonal variability of surface evaporation is governed mainly by temperature variation.

    Views (last year): 15. Citations: 1 (RSCI).
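
A minimal sketch of the kind of regression described above: a trigonometric polynomial for the seasonal cycle plus a linear exergy term, fitted by ordinary least squares. The arrays (doy, exergy, flux) and the number of harmonics are synthetic assumptions, not the paper's data or code.

```python
# Illustrative sketch (not the authors' code): fitting a trigonometric
# polynomial plus a linear exergy term to a daily flux series by OLS.
# The arrays `doy`, `exergy` and `flux` below are synthetic.
import numpy as np

def trig_design_matrix(doy, n_harmonics=3, period=365.0):
    """Columns: intercept plus cos/sin harmonics of the annual cycle."""
    cols = [np.ones_like(doy, dtype=float)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * doy / period))
        cols.append(np.sin(2 * np.pi * k * doy / period))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
doy = np.arange(1, 241)                       # day of year, hypothetical record
exergy = 50 + 30 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 3, doy.size)
flux = 0.05 * exergy + 2 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 0.5, doy.size)

X = np.column_stack([trig_design_matrix(doy), exergy])   # seasonal terms + exergy
beta, *_ = np.linalg.lstsq(X, flux, rcond=None)          # OLS coefficients
fitted = X @ beta
r2 = 1 - np.sum((flux - fitted) ** 2) / np.sum((flux - flux.mean()) ** 2)
print(f"exergy coefficient: {beta[-1]:.3f}, R^2 = {r2:.2f}")
```
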
  2. Sokolov A.V., Mamkin V.V., Avilov V.K., Tarasov D.L., Kurbatova Y.A., Olchev A.V.
    Application of a balanced identification method for gap-filling in CO2 flux data in a sphagnum peat bog
    Computer Research and Modeling, 2019, v. 11, no. 1, pp. 153-171

    The method of balanced identification was used to describe the response of the Net Ecosystem Exchange of CO2 (NEE) to changes in environmental factors and to fill the gaps in continuous CO2 flux measurements in a sphagnum peat bog in the Tver region. The measurements were carried out in the peat bog by the eddy covariance method from August to November 2017. Owing to rainy weather conditions and recurrent periods of low atmospheric turbulence, the proportion of gaps in the measured CO2 fluxes at our experimental site over the entire measurement period exceeded 40%. The model developed for gap filling in long-term experimental data treats NEE as the difference between Ecosystem Respiration (RE) and Gross Primary Production (GPP), i.e. the key processes of ecosystem functioning, and describes their dependence on incoming solar radiation (Q), soil temperature (T), water vapor pressure deficit (VPD) and ground water level (WL). The balanced identification method applied for this purpose is based on the search for the optimal balance between model simplicity and data fitting accuracy — the balance that minimizes the modeling error estimated by cross-validation. The numerical solutions obtained are characterized by the minimum necessary nonlinearity (curvature), which provides interpolation and extrapolation properties sufficient for the developed models and is particularly important for filling the missing NEE values. Analysis of the temporal variability of NEE and the key environmental factors revealed a statistically significant dependence of GPP on Q, T and VPD, and of RE on T and WL, respectively. At the same time, the error of the applied method in simulating mean daily NEE was less than 10%, and the error of the NEE estimates obtained by this method was higher than that of the REddyProc model, which takes into account the influence of a smaller number of environmental parameters on NEE. Analysis of the gap-filled NEE time series made it possible to describe the diurnal and day-to-day variability of NEE and to obtain cumulative CO2 fluxes in the peat bog for the selected summer–autumn period. It was shown that in August the rate of CO2 fixation by the peat bog vegetation was significantly higher than the rate of ecosystem respiration, whereas from September onward, owing to a strong decrease in GPP, the peat bog turned into a consistent source of CO2 for the atmosphere.

    Views (last year): 19.
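
For illustration only, the sketch below fills gaps in a synthetic NEE record with a simple parametric NEE = RE − GPP model (a Q10 respiration term and a light-response curve) fitted to the available records; the variable names and parameter values are assumptions, and the balanced identification step that selects model complexity by cross-validation is not reproduced here.

```python
# Illustrative gap-filling sketch (not the balanced identification code):
# NEE is modelled as RE(T) - GPP(Q), fitted to the available records and
# then used to predict NEE where measurements are missing.
import numpy as np
from scipy.optimize import curve_fit

def nee_model(X, gpp_max, alpha, r0, q10):
    Q, T = X                                            # radiation and soil temperature
    gpp = gpp_max * alpha * Q / (gpp_max + alpha * Q)   # rectangular-hyperbola light response
    re = r0 * q10 ** ((T - 10.0) / 10.0)                # Q10 respiration
    return re - gpp                                     # NEE = RE - GPP

rng = np.random.default_rng(1)
Q = rng.uniform(0, 1500, 2000)                # synthetic radiation
T = rng.uniform(5, 20, 2000)                  # synthetic soil temperature, degC
nee = nee_model((Q, T), 12.0, 0.03, 2.0, 2.1) + rng.normal(0, 0.8, Q.size)
gap = rng.random(Q.size) < 0.4                # ~40 % of the record is missing

popt, _ = curve_fit(nee_model, (Q[~gap], T[~gap]), nee[~gap],
                    p0=(10.0, 0.02, 1.0, 2.0))
nee_filled = np.where(gap, nee_model((Q, T), *popt), nee)
print("fitted parameters:", np.round(popt, 3))
```
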
  3. Koganov A.V., Rakcheeva T.A., Prikhodko D.I.
    Experimental identification of the organization of mental calculations of the person on the basis of algebras of different associativity
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 311-327

    The work continues research into a person's ability to increase the productivity of information processing by working in parallel or by improving the performance of the analyzers. A subject receives a series of tasks whose solution requires processing a certain amount of information. The time and the correctness of each solution are recorded. The dependence of the average solution time on the amount of information in the task is determined from the correctly solved tasks. In accordance with the proposed method, the tasks involve evaluating expressions in two algebras, one of which is associative and the other non-associative. To facilitate the subjects' work, figurative graphic images of the algebra elements were used in the experiment. Non-associative calculations were implemented in the form of the game “rock–paper–scissors”: it was necessary to determine the winning symbol in a long line of these figures, assuming that they appear sequentially from left to right and play against the previous winning symbol. Associative calculations were based on the recognition of pictures from a finite set of simple images: it was necessary to determine which picture from this set was missing from the line, or to state that all the pictures were present; in each task, at most one picture was missing. Computation in an associative algebra allows parallel counting, whereas in the absence of associativity only sequential computation is possible. Therefore, analysis of the time needed to solve a series of tasks distinguishes uniform sequential, accelerated sequential, and parallel computing strategies. The experiments showed that all subjects used a uniform sequential strategy to solve the non-associative tasks. For the associative task, all subjects used parallel computation, and some accelerated the parallel computation as the complexity of the task grew. A small proportion of the subjects, judging by the evolution of the solution time, supplemented the parallel computation with a sequential stage of calculations at high complexity (possibly to check the solution). We developed a special method for assessing the rate at which a person processes input information, which allowed us to estimate the level of parallelism of the computation in the associative task; a parallelism level of two to three was registered. The characteristic speed of information processing in the sequential case (about one and a half symbols per second) is half the typical speed of human image recognition; apparently, the difference reflects the processing time actually spent on the calculation itself. For the associative task with a minimal amount of information, the solution time is close to that of the non-associative case, or smaller by less than a factor of two. This is probably because, for a small number of symbols, recognition nearly exhausts the calculations required by the non-associative task used.

    Views (last year): 16.
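
As a purely illustrative reading of the time-vs-information analysis, the sketch below fits a straight line to hypothetical mean solution times for the two task types and takes the ratio of the slopes as a crude estimate of the parallelism level; the timing numbers are invented, not the experimental records.

```python
# Illustrative sketch: estimating a "parallelism level" as the ratio of the
# per-symbol time slopes for the non-associative (sequential) and associative
# tasks. The timing data below are hypothetical.
import numpy as np

symbols = np.array([4, 8, 12, 16, 20])                 # amount of information
t_nonassoc = np.array([3.1, 5.8, 8.6, 11.2, 13.9])     # mean solution time, s
t_assoc = np.array([2.9, 4.0, 5.2, 6.1, 7.4])

slope_seq = np.polyfit(symbols, t_nonassoc, 1)[0]      # s per symbol, sequential
slope_par = np.polyfit(symbols, t_assoc, 1)[0]         # s per symbol, associative
print(f"sequential rate: {1 / slope_seq:.2f} symbols/s")
print(f"estimated parallelism level: {slope_seq / slope_par:.1f}")
```
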
  4. Methi G., Kumar A.
    Numerical Solution of Linear and Higher-order Delay Differential Equations using the Coded Differential Transform Method
    Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1091-1099

    The aim of the paper is to obtain numerical solutions of linear and higher-order delay differential equations (DDEs) using the coded differential transform method (CDTM). The CDTM is developed and applied to delay problems to show the efficiency of the proposed method. The coded differential transform method is a combination of the differential transform method and the Mathematica software. We construct recursive relations for a few delay problems, which yield systems of simultaneous equations, and solve them to obtain the terms of the series solution. The numerical solution obtained by the CDTM is compared with the exact solution. Numerical results and an error analysis are presented for delay differential equations to show that the proposed method is suitable for solving them. It is established that the delay differential equations under discussion are solvable in a specific domain. The error between the CDTM solution and the exact solution becomes very small when more terms are included in the series solution. The coded differential transform method reduces complex calculations, avoids discretization and linearization, and saves computation time. In addition, it is easy to implement and robust. The error analysis shows that the CDTM is consistent and converges fast. We obtain more accurate results with the coded differential transform method than with other methods.
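
To illustrate the flavor of a differential transform recursion for a delay problem (this is not the authors' Mathematica code), the sketch below treats the proportional-delay pantograph equation y'(t) = −y(t) + 0.5·y(t/2), y(0) = 1, for which the transform of y(t/2) is (1/2)^k·Y(k), and sums the truncated series.

```python
# Illustrative differential-transform recursion for the pantograph equation
# y'(t) = -y(t) + 0.5*y(t/2), y(0) = 1. The transform of y(q*t) is q**k * Y(k),
# which turns the DDE into the algebraic recursion below.

def dtm_coefficients(n_terms, q=0.5):
    Y = [1.0]                                          # Y(0) = y(0)
    for k in range(n_terms - 1):
        Y.append((-Y[k] + 0.5 * q**k * Y[k]) / (k + 1))
    return Y

def series_value(Y, t):
    return sum(c * t**k for k, c in enumerate(Y))

Y = dtm_coefficients(15)                               # 15 series terms
for t in (0.2, 0.5, 1.0):
    print(f"y({t}) approx {series_value(Y, t):.6f}")
```
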

  5. This article solves the problem of developing a technology for collecting the initial data needed to build models for assessing the functional state of a person. The state is assessed from the pupillary response to a change in illumination using the pupillometry method. This method involves the collection and analysis of initial data (pupillograms) presented as time series that characterize the dynamics of the human pupil in response to a light impulse. The drawbacks of the traditional approach to collecting the initial data, based on computer vision methods and smoothing of the time series, are analyzed. Attention is focused on the importance of the quality of the initial data for building adequate mathematical models. The need for manual marking of the iris and pupil circles is substantiated as a way to improve the accuracy and quality of the initial data. The stages of the proposed data collection technology are described. An example of an obtained pupillogram is given, which has a smooth shape and contains no outliers, noise, anomalies or missing values. Based on the presented technology, a software and hardware complex has been developed, consisting of special software with two main modules and hardware implemented on the basis of a Raspberry Pi 4 Model B microcomputer with peripheral equipment that provides the specified functionality. To evaluate the effectiveness of the developed technology, models of a single-layer perceptron and an ensemble of neural networks were used, built on initial data describing the functional state of intoxication. The studies showed that manual marking of the initial data (in comparison with automatic computer vision methods) reduces the number of type I and type II errors and, accordingly, increases the accuracy of assessing a person's functional state. Thus, the presented technology for collecting initial data can be used effectively to build adequate models for assessing the functional state of a person from the pupillary response to changes in illumination. Such models are relevant for individual problems of ensuring transport security, in particular for monitoring the functional state of drivers.
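
As a contrast to the manual marking advocated above, the sketch below shows a typical automatic cleaning step for a pupillogram: blink-like outliers flagged by a median-filter residual and replaced by interpolation. The signal, sampling rate and thresholds are hypothetical.

```python
# Illustrative cleaning of a synthetic pupillogram: outliers are flagged by
# the residual against a median-filtered baseline and replaced by linear
# interpolation. Not the authors' pipeline, which relies on manual marking.
import numpy as np
from scipy.signal import medfilt

t = np.linspace(0, 4, 400)                       # seconds, ~100 Hz, hypothetical
pupil = 4.0 - 1.2 * np.exp(-t / 0.6) + 0.03 * np.random.default_rng(2).normal(size=t.size)
pupil[120:128] = 1.0                             # a blink-like dropout

baseline = medfilt(pupil, kernel_size=21)        # robust running baseline
outlier = np.abs(pupil - baseline) > 0.3         # flag large residuals
clean = pupil.copy()
clean[outlier] = np.interp(t[outlier], t[~outlier], pupil[~outlier])
print(f"replaced {outlier.sum()} outlier samples")
```
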

  6. Aristov V.V., Stroganov A.V., Yastrebov A.D.
    Application of the kinetic type model for study of a spatial spread of COVID-19
    Computer Research and Modeling, 2021, v. 13, no. 3, pp. 611-627

    A simple model based on a kinetic-type equation is proposed to describe the spread of a virus in space through the migration of virus carriers from a certain center. The consideration is carried out on the example of three countries for which such a one-dimensional model is applicable: Russia, Italy and Chile. The geographical location of these countries and their elongation away from the centers of infection (Moscow, Milan and Lombardy as a whole, and Santiago, respectively) make this approximation possible. The aim is to determine the dynamic density of the infected in time and space. The model has two parameters. The first parameter is the average spreading speed associated with the transfer of infected people by transport vehicles. The second parameter is the rate at which the number of infected carriers decreases as they move through the country, associated with passengers reaching their destinations as well as with quarantine measures. The parameters are determined from the actual data for the first days of the spatial spread of the epidemic. An analytical solution is constructed; simple numerical methods are also used to obtain a series of calculations. The model accounts for the geographical spread of the disease, but a second important factor, contact infection on site, is not taken into account. Therefore, the calculated values coincide with the actual data in the initial period of infection, after which the actual data become higher than the model values. Nevertheless, the model calculations allow us to make some predictions. In addition to the speed of infection, a similar “speed of recovery” can be introduced. When such a speed is determined for the majority of the country's population, a conclusion is drawn about the onset of overall recovery, which agrees with real data.
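
A minimal numerical sketch in the spirit of the model (not the authors' scheme): a one-dimensional transport equation with a decay term, f_t + c·f_x = −ν·f, integrated by a first-order upwind scheme, with the spreading speed c and drop-out frequency ν chosen arbitrarily.

```python
# Illustrative sketch: one-dimensional transport with decay,
# f_t + c*f_x = -nu*f, integrated by a first-order upwind scheme.
# It mimics carriers leaving a single center with average speed c and
# dropping out at frequency nu; all parameter values are hypothetical.
import numpy as np

c, nu = 60.0, 0.08          # km/day and 1/day, hypothetical
L, nx = 3000.0, 300         # domain length (km) and grid size
dx = L / nx
dt = 0.8 * dx / c           # CFL-stable time step
x = np.linspace(0.0, L, nx)

f = np.zeros(nx)
f[0] = 1.0                  # infected density concentrated at the center
for _ in range(int(30 / dt)):                   # integrate for 30 days
    f[1:] = f[1:] - c * dt / dx * (f[1:] - f[:-1]) - nu * dt * f[1:]
    f[0] = 1.0              # the center keeps supplying carriers
print("approximate front position, km:", x[np.argmax(f < 1e-3 * f.max())])
```
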

  7. Grinevich A.A., Yakushevich L.V.
    On the computer experiments of Kasman
    Computer Research and Modeling, 2019, v. 11, no. 3, pp. 503-513

    In 2007 Kasman carried out a series of original computer experiments with sine-Gordon kinks moving along artificial DNA sequences. Two sequences were considered, each consisting of two parts separated by a boundary. The left part of the first sequence contained repeating TTA triplets, which encode leucine, and the right part contained repeating CGC triplets, which encode arginine. In the second sequence, the left part contained repeating CTG triplets encoding leucine, and the right part contained repeating AGA triplets encoding arginine. When modeling the kink movement, an interesting effect was discovered: a kink moving along one of the sequences stopped before reaching the end of the sequence and then “bounced back”, as if it had hit a wall, while the kink movement in the other sequence did not stop during the entire experiment. These computer experiments, however, used the simple DNA model proposed by Salerno, which takes into account the differences in the interactions of complementary bases within pairs but ignores the differences in the moments of inertia of the nitrogenous bases and in the distances between the centers of mass of the bases and the sugar-phosphate chain. Whether the Kasman effect persists in more accurate DNA models has remained an open question. In this paper, we investigate the Kasman effect on the basis of a more accurate DNA model that takes both of these differences into account. We obtained the energy profiles of Kasman's sequences and constructed the trajectories of kinks launched into these sequences with different initial energies. The results confirm the existence of the Kasman effect, but only within a limited interval of initial kink energies and for a certain direction of kink movement; in other cases, the effect was not observed. We discuss which of the studied sequences are energetically preferable for the excitation and propagation of kinks.

    Views (last year): 23.
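
For illustration, the sketch below integrates a discrete sine-Gordon chain whose on-site barrier changes at the middle of the chain, as a crude stand-in for a boundary between two sequence regions; it is not the Salerno model used by Kasman or the more accurate model of this paper, and all parameters are hypothetical.

```python
# Illustrative discrete sine-Gordon chain with a stronger on-site barrier in
# the right half. A kink launched from the left either passes the boundary or
# is reflected, depending on its initial velocity. Parameters are hypothetical.
import numpy as np

n, dx, dt = 400, 1.0, 0.05
V = np.ones(n); V[n // 2:] = 1.6            # stronger barrier in the right half
x = np.arange(n) * dx

def kink(x, x0, v):
    """Moving sine-Gordon kink phi = 4*arctan(exp(gamma*(x-x0))) and its time derivative."""
    g = 1.0 / np.sqrt(1.0 - v * v)
    u = np.clip(g * (x - x0), -30.0, 30.0)  # clip to avoid exp overflow
    return 4.0 * np.arctan(np.exp(u)), -2.0 * g * v / np.cosh(u)

phi, vel = kink(x, 100.0, 0.4)              # kink launched toward the boundary
for _ in range(6000):                       # velocity-Verlet time stepping
    lap = np.zeros(n)
    lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
    vel += 0.5 * dt * (lap - V * np.sin(phi))
    phi += dt * vel
    lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
    vel += 0.5 * dt * (lap - V * np.sin(phi))

center = x[np.argmin(np.abs(phi - np.pi))]  # kink position ~ where phi = pi
print(f"kink center after integration: x = {center:.1f}")
```
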
  8. Vornovskikh P.A., Kim A., Prokhorov I.V.
    The applicability of the approximation of single scattering in pulsed sensing of an inhomogeneous medium
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1063-1079

    A mathematical model based on the linear integro-differential Boltzmann equation is considered in this article. The model describes radiation transfer in a scattering medium irradiated by a point source. An inverse problem for the transfer equation is defined, which consists of determining the scattering coefficient from the time-angular distribution of the radiation flux density at a given point in space. The Neumann series representation of the solution of the radiation transfer equation is analyzed in the study of the inverse problem: the zeroth term of the series describes unscattered radiation, the first term describes the singly scattered field, and the remaining terms describe the multiply scattered field. When an approximate solution of the radiation transfer equation is computed, the single scattering approximation is widely used for regions with a small optical thickness and a low level of scattering. An analytical formula for the scattering coefficient is obtained using this approximation for the problem with additional restrictions on the initial data. To verify the adequacy of the obtained formula, a weighted Monte Carlo method for solving the transfer equation was constructed and implemented in software, taking into account multiple scattering in the medium and the space-time singularity of the radiation source. Computational experiments were carried out for problems of high-frequency acoustic sensing in the ocean. The use of the single scattering approximation is justified, at least, at a sensing range of about one hundred meters, with the doubly and triply scattered fields making the main contribution to the error of the formula. For larger regions, the single scattering approximation gives at best only a qualitative evaluation of the medium structure; sometimes it does not even allow determining the order of magnitude of the quantitative characteristics of the interaction of radiation with matter.
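
A toy Monte Carlo sketch (unrelated to the authors' weighted scheme) that shows why the single scattering approximation degrades with optical distance: photons from a point source travel exponential free paths and scatter isotropically, and the share of photons leaving a sphere of radius R after at most one scattering is tallied for several optical distances sigma*R.

```python
# Toy Monte Carlo: for each photon that first crosses a sphere of radius R we
# record how many times it had scattered, showing the single-scattering share
# shrinking as the optical distance sigma*R grows. Purely illustrative.
import numpy as np

rng = np.random.default_rng(3)

def scattering_order_at_radius(sigma, R, n_photons=5000, max_events=50):
    orders = []
    for _ in range(n_photons):
        pos = np.zeros(3)
        direction = np.array([0.0, 0.0, 1.0])       # initial beam direction
        for k in range(max_events):
            step = rng.exponential(1.0 / sigma)     # exponential free path
            pos = pos + step * direction
            if np.linalg.norm(pos) >= R:            # photon leaves the sphere
                orders.append(k)                    # k scatterings so far
                break
            mu = rng.uniform(-1.0, 1.0)             # isotropic scattering
            phi = rng.uniform(0.0, 2.0 * np.pi)
            s = np.sqrt(1.0 - mu * mu)
            direction = np.array([s * np.cos(phi), s * np.sin(phi), mu])
    orders = np.array(orders)
    return np.mean(orders <= 1), np.mean(orders >= 2)

for tau in (0.3, 1.0, 3.0):                         # optical distance sigma*R
    single, multiple = scattering_order_at_radius(sigma=1.0, R=tau)
    print(f"tau = {tau}: unscattered+single = {single:.2f}, multiple = {multiple:.2f}")
```
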

  9. Dmitriev A.V., Markov N.V.
    Double layer interval weighted graphs in assessing the market risks
    Computer Research and Modeling, 2014, v. 6, no. 1, pp. 159-166

    This work is devoted to the application of two-layer interval weighted graphs to nonstationary time series forecasting and market risk evaluation. The first layer of the graph, formed during the primary training of the system, represents the potential fluctuations of the system at the time of training. The interval vertices of the second layer of the graph (a superstructure over the first layer), which represent the degree of the time series modeling error, are connected to the first layer by edges. The proposed model was tested on a 90-day forecast for steel billets. The average forecast error is 2.6% (less than the average forecast error of the autoregressive models).

    Views (last year): 2. Citations: 1 (RSCI).
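
The abstract does not detail the two-layer construction, so the sketch below is only a generic example of forecasting with an interval-weighted transition graph built from a training series; it should not be read as the authors' method.

```python
# Generic illustration only: values are binned into intervals (graph vertices),
# edge weights count observed transitions, and the forecast follows the
# heaviest outgoing edge. Not the authors' two-layer construction.
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
series = np.cumsum(rng.normal(0, 1, 500)) + 100       # hypothetical price series

n_bins = 12
bins = np.linspace(series.min(), series.max(), n_bins + 1)
states = np.clip(np.digitize(series, bins) - 1, 0, n_bins - 1)

transitions = Counter(zip(states[:-1], states[1:]))   # weighted edges

def forecast_interval(state):
    nxt = max((s2 for (s1, s2) in transitions if s1 == state),
              key=lambda s2: transitions[(state, s2)])
    return bins[nxt], bins[nxt + 1]                   # predicted value interval

state = states[-1]
if not any(s1 == state for (s1, _) in transitions):   # fall back if unseen as a source
    state = states[-2]
lo, hi = forecast_interval(state)
print(f"next value expected in [{lo:.1f}, {hi:.1f}]")
```
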
  10. Silaeva V.A., Silaeva M.V., Silaev A.M.
    Estimation of models parameters for time series with Markov switching regimes
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 903-918

    The paper considers the problem of estimating the parameters of time series described by regression models with Markov switching between two regimes at random instants of time, with independent Gaussian noise. For the solution, we propose a variant of the EM algorithm based on an iterative procedure in which the regression parameters are estimated for a given regime-switching sequence and the switching sequence is estimated for the given parameters of the regression models. In contrast to the well-known methods for estimating regression parameters in models with Markov switching, which are based on calculating the posterior probabilities of the discrete states of the switching sequence, in this paper estimates of the switching sequence that are optimal by the criterion of maximum posterior probability are calculated. As a result, the proposed algorithm turns out to be simpler and requires fewer calculations. Computer modeling reveals the factors influencing the accuracy of the estimation: the number of observations, the number of unknown regression parameters, the degree of their difference between the regimes, and the signal-to-noise ratio, which is associated with the coefficient of determination in the regression models. The proposed algorithm is applied to the problem of estimating the parameters of regression models for the daily return of the RTS index as a function of the returns of the S&P 500 index and Gazprom shares for the period from 2013 to 2018. The parameter estimates obtained by the proposed algorithm are compared with the estimates produced by the EViews econometric package and with ordinary least squares estimates that ignore regime switching. Taking regime switching into account gives a more accurate representation of the structure of the statistical dependence between the variables under study. In the switching models, an increase in the signal-to-noise ratio reduces the differences between the estimates produced by the proposed algorithm and by the EViews program.

    Views (last year): 36.
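
A simplified illustration of the alternating estimation idea (not the paper's algorithm): per-regime OLS fits alternate with reassigning each observation to the regime with the smaller squared residual; the Markov prior on the switching sequence, which the paper's MAP criterion includes, is omitted, and the data are synthetic.

```python
# Simplified two-regime switching regression: alternate between per-regime OLS
# and residual-based reassignment of observations. The Markov prior on the
# switching sequence is omitted here; data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n = 400
x = rng.normal(size=n)
true_regime = (rng.random(n) < 0.5).astype(int)      # synthetic switching
beta_true = np.array([[0.2, 1.5], [1.0, -0.5]])      # intercept, slope per regime
y = beta_true[true_regime, 0] + beta_true[true_regime, 1] * x + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), x])
regime = rng.integers(0, 2, n)                       # random initial assignment
for _ in range(30):
    betas = [np.linalg.lstsq(X[regime == r], y[regime == r], rcond=None)[0]
             for r in (0, 1)]
    resid = np.column_stack([y - X @ b for b in betas])
    regime = np.argmin(resid ** 2, axis=1)           # residual-based reassignment
print("estimated coefficients per regime:", np.round(betas, 2))
```
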