Search results for 'second-order method':
Articles found: 74
  1. Shestoperov A.I., Ivchenko A.V., Fomina E.V.
    Changepoint detection in biometric data: retrospective nonparametric segmentation methods based on dynamic programming and sliding windows
    Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1295-1321

    This paper is dedicated to the analysis of medical and biological data obtained through locomotor training and testing of astronauts conducted both on Earth and during spaceflight. These experiments can be described as the astronaut’s movement on a treadmill according to a predefined regimen in various speed modes. During these modes, not only the speed is recorded but also a range of parameters, including heart rate, ground reaction force, and others, are collected. In order to analyze the dynamics of the astronaut’s condition over an extended period, it is necessary to perform a qualitative segmentation of their movement modes to independently assess the target metrics. This task becomes particularly relevant in the development of an autonomous life support system for astronauts that operates without direct supervision from Earth. The segmentation of target data is complicated by the presence of various anomalies, such as deviations from the predefined regimen, arbitrary and varying duration of mode transitions, hardware failures, and other factors. The paper includes a detailed review of several contemporary retrospective (offline) nonparametric methods for detecting multiple changepoints, which refer to sudden changes in the properties of the observed time series occurring at unknown moments. Special attention is given to algorithms and statistical measures that determine the homogeneity of the data and methods for detecting change points. The paper considers approaches based on dynamic programming and sliding window methods. The second part of the paper focuses on the numerical modeling of these methods using characteristic examples of experimental data, including both “simple” and “complex” speed profiles of movement. The analysis conducted allowed us to identify the preferred methods, which will be further evaluated on the complete dataset. 
Preference is given to methods whose markup is close to a reference one, that can in principle detect both boundaries of transient processes, and that are robust with respect to their internal parameters.
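The sliding-window family of detectors discussed in the abstract can be illustrated with a minimal sketch (not the authors' implementation): a discrepancy statistic between two adjacent windows is scanned along the series, and well-separated local maxima above a threshold are reported as changepoints. The mean-shift statistic and all names below are illustrative assumptions.

```python
import numpy as np

def sliding_window_changepoints(x, w, threshold):
    """Detect changepoints by comparing the means of two adjacent
    windows of width w at every position; local maxima of the
    discrepancy score above the threshold are reported."""
    n = len(x)
    scores = np.zeros(n)
    for t in range(w, n - w):
        left = x[t - w:t]
        right = x[t:t + w]
        scores[t] = abs(left.mean() - right.mean())
    # keep only local maxima of the score that exceed the threshold
    return [t for t in range(w, n - w)
            if scores[t] > threshold
            and scores[t] == scores[max(0, t - w):t + w].max()]

# piecewise-constant toy signal with changepoints at 100 and 200
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.1, 100),
                    rng.normal(1.0, 0.1, 100),
                    rng.normal(0.3, 0.1, 100)])
cps = sliding_window_changepoints(x, w=20, threshold=0.3)
print(cps)  # positions near 100 and 200
```

Retrospective dynamic-programming methods instead optimize a global segmentation cost, which is more expensive but typically more accurate near transient processes.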

  2. Irkhin I.A., Bulatov V.G., Vorontsov K.V.
    Additive regularization of topic models with fast text vectorization
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1515-1528

    The probabilistic topic model of a text document collection finds two matrices: a matrix of conditional probabilities of topics in documents and a matrix of conditional probabilities of words in topics. Each document is represented by a multiset of words also called the “bag of words”, thus assuming that the order of words is not important for revealing the latent topics of the document. Under this assumption, the problem is reduced to a low-rank non-negative matrix factorization governed by likelihood maximization. In general, this problem is ill-posed having an infinite set of solutions. In order to regularize the solution, a weighted sum of optimization criteria is added to the log-likelihood. When modeling large text collections, storing the first matrix seems to be impractical, since its size is proportional to the number of documents in the collection. At the same time, the topical vector representation (embedding) of documents is necessary for solving many text analysis tasks, such as information retrieval, clustering, classification, and summarization of texts. In practice, the topical embedding is calculated for a document “on-the-fly”, which may require dozens of iterations over all the words of the document. In this paper, we propose a way to calculate a topical embedding quickly, by one pass over document words. For this, an additional constraint is introduced into the model in the form of an equation, which calculates the first matrix from the second one in linear time. Although formally this constraint is not an optimization criterion, in fact it plays the role of a regularizer and can be used in combination with other regularizers within the additive regularization framework ARTM. Experiments on three text collections have shown that the proposed method improves the model in terms of sparseness, difference, logLift and coherence measures of topic quality. The open source libraries BigARTM and TopicNet were used for the experiments.
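The one-pass embedding idea can be sketched roughly as follows (an illustrative simplification, not the exact ARTM constraint from the paper): each word contributes its topic posterior p(t|w), computed from the word-in-topic matrix and a topic prior, and the document embedding is the average of these posteriors over a single pass through the document.

```python
import numpy as np

def one_pass_embedding(doc_word_ids, phi, topic_prior=None):
    """Sketch of a one-pass topical embedding: average the per-word
    topic posteriors p(t|w) ~ phi[w, t] * p(t) over the document.

    phi: (num_words, num_topics) matrix whose columns are p(w|t)."""
    num_topics = phi.shape[1]
    if topic_prior is None:
        topic_prior = np.full(num_topics, 1.0 / num_topics)
    theta = np.zeros(num_topics)
    for w in doc_word_ids:            # single pass over document words
        p_tw = phi[w] * topic_prior   # unnormalized p(t|w)
        theta += p_tw / p_tw.sum()
    return theta / len(doc_word_ids)

# toy model: 4 words, 2 topics (columns of phi sum to 1)
phi = np.array([[0.6, 0.0],
                [0.3, 0.1],
                [0.1, 0.4],
                [0.0, 0.5]])
theta = one_pass_embedding([0, 0, 1], phi)
print(theta)  # dominated by topic 0
```

The contrast with the usual approach is that no iterative E-step over the document is needed; the embedding is linear in the document length.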

  3. Andreeva A.A., Anand M., Lobanov A.I., Nikolaev A.V., Panteleev M.A.
    Using extended ODE systems to investigate the mathematical model of the blood coagulation
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 931-951

    Many properties of the solutions of systems of ordinary differential equations are determined by the properties of the equations in variations. An ODE system that includes both the original nonlinear system and the equations in variations will hereafter be called an extended system. When studying the properties of the Cauchy problem for systems of ordinary differential equations, the transition to extended systems allows one to study many subtle properties of solutions. For example, it makes it possible to increase the order of approximation of numerical methods, provides approaches to constructing the sensitivity function without numerical differentiation procedures, and allows one to use methods of higher convergence order for solving the inverse problem. The authors used Broyden's method, which belongs to the class of quasi-Newton methods. The Rosenbrock method with complex coefficients was used to solve stiff systems of ordinary differential equations; in our case, it is equivalent to a second-order approximation method for the extended system.

    As an example of the proposed approach, several related mathematical models of the blood coagulation process were considered. Based on the analysis of the results of numerical calculations, the conclusion was drawn that it is necessary to include a description of the factor XI positive feedback loop in the system of model equations. Estimates of some reaction constants based on the numerical solution of the inverse problem were given.

    The effect of factor V release on platelet activation was considered. The modification of the mathematical model made it possible to achieve quantitative agreement between the dynamics of thrombin production and experimental data for an artificial system. Based on the sensitivity analysis, the hypothesis was tested that the lipid membrane composition (the number of binding sites for the various clotting factors, except for thrombin sites) does not influence the dynamics of the process.
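The extended-system idea from the abstract can be illustrated on a toy scalar ODE: the equation in variations for the sensitivity s = dx/dp is integrated together with the state. A classical RK4 integrator stands in for the Rosenbrock scheme used in the paper, and the model dx/dt = -p*x is purely illustrative.

```python
import numpy as np

def rhs(t, y, p):
    """Extended system for dx/dt = -p*x together with the
    sensitivity s = dx/dp, which obeys the equation in variations
    ds/dt = (df/dx)*s + df/dp = -p*s - x."""
    x, s = y
    return np.array([-p * x, -p * s - x])

def rk4(f, y0, t0, t1, n, p):
    """Classical 4th-order Runge-Kutta integrator (a stand-in for the
    Rosenbrock scheme with complex coefficients used in the paper)."""
    h = (t1 - t0) / n
    t, y = t0, np.array(y0, dtype=float)
    for _ in range(n):
        k1 = f(t, y, p)
        k2 = f(t + h / 2, y + h / 2 * k1, p)
        k3 = f(t + h / 2, y + h / 2 * k2, p)
        k4 = f(t + h, y + h * k3, p)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

p, T = 2.0, 1.0
x, s = rk4(rhs, [1.0, 0.0], 0.0, T, 1000, p)
# analytic solution: x(T) = exp(-p*T), dx/dp = -T*exp(-p*T)
print(x, s)
```

The same construction scales to systems: the sensitivity block is driven by the Jacobian of the right-hand side, which is exactly what makes higher-order inverse-problem methods applicable without numerical differentiation.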

  4. Vetrin R.L., Koberg K.
    Reinforcement learning in optimisation of financial market trading strategy parameters
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1793-1812

    High-frequency algorithmic trading is a subclass of trading focused on gaining basis-point-scale profitability on sub-second time frames. Such trading strategies do not depend on most of the factors relevant to longer-term trading and require a specific approach. There have been many attempts to apply machine learning techniques to both high- and low-frequency trading. However, machine learning still has limited application in real-world trading due to high exposure to overfitting, the need for rapid adaptation to new market regimes, and the overall instability of results. We conducted comprehensive research on combining known quantitative theory with reinforcement learning methods in order to derive a more effective and robust approach to constructing an automated trading system, aiming to support known algorithmic trading techniques. Using classical price behavior theories as well as modern application cases in sub-millisecond trading, we applied reinforcement learning models to improve the quality of the algorithms. As a result, we derived a robust model that uses deep reinforcement learning to optimise the parameters of static market-making trading algorithms and is capable of online learning on live data. More specifically, we explored the system in the cryptocurrency derivatives market, which in the short term is largely independent of external factors. Our research was implemented in a high-frequency environment, and the final models showed the capability to operate within accepted high-frequency trading time frames. We compared various combinations of deep reinforcement learning approaches with the classic algorithms and evaluated the robustness and effectiveness of the improvements for each combination.
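As a toy illustration of online parameter optimisation for a market-making algorithm (a drastic simplification of the deep reinforcement learning setup described above), an epsilon-greedy bandit can select a spread parameter from noisy P&L feedback. The environment and all names below are assumptions, not the authors' system.

```python
import random

def run_bandit(pnl_of_spread, spreads, steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: choose a market-making spread parameter
    online from noisy per-step P&L feedback."""
    rng = random.Random(seed)
    q = [0.0] * len(spreads)   # running P&L estimate per spread
    n = [0] * len(spreads)
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(spreads))                       # explore
        else:
            a = max(range(len(spreads)), key=lambda i: q[i])      # exploit
        r = pnl_of_spread(spreads[a], rng)
        n[a] += 1
        q[a] += (r - q[a]) / n[a]   # incremental mean update
    return spreads[max(range(len(spreads)), key=lambda i: q[i])]

# toy environment: spread 0.02 is optimal, rewards are noisy
def toy_pnl(spread, rng):
    return -(spread - 0.02) ** 2 + rng.gauss(0.0, 0.0001)

best = run_bandit(toy_pnl, [0.01, 0.02, 0.03, 0.05])
print(best)  # the spread with the best estimated P&L
```

A deep RL agent replaces the discrete action table with a neural policy over a continuous parameter space and a richer market state, but the online-learning loop has the same shape.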


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

