-
Modeling time series trajectories using the Liouville equation
Computer Research and Modeling, 2024, v. 16, no. 3, pp. 585-598
This paper presents an algorithm for modeling a set of trajectories of a non-stationary time series, based on a numerical scheme for approximating the sample density of the distribution function in a problem with fixed ends, when the initial distribution transforms over a given number of steps into a certain final distribution, so that at each step the semigroup property of the solution of the Liouville equation is satisfied. The model makes it possible to numerically construct the evolving densities of distribution functions during random switching of the states of the system generating the original time series.
The main difficulty is that a numerical implementation of the left-hand time derivative makes the solution unstable, yet it is precisely this approach that corresponds to modeling evolution. An alternative is to choose stable implicit schemes that “look into the future”, but this violates the semigroup property at each step. If, on the other hand, a real process is being modeled in which goal-setting presumably takes place, then it is desirable to use schemes that generate a model of the transition process. Such a model is subsequently used to build a predictor of the disorder, which makes it possible to determine exactly which state the process under study is entering before the process has actually entered it. The model described in the article can be used as a tool for modeling real non-stationary time series.
The steps of the modeling scheme are as follows. Fragments corresponding to certain states, for example, trends with specified slope angles and variances, are selected from a given time series. Reference distributions of the states are compiled from these fragments. Then the empirical distributions of the duration of the system’s stay in the specified states and of the duration of transitions from state to state are determined. In accordance with these empirical distributions, a probabilistic model of the disorder is constructed and the corresponding trajectories of the time series are modeled.
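The scheme above can be sketched in code. The following is a minimal illustration, not the authors' algorithm: a synthetic two-state series with random trend durations is generated, and the empirical run-length (state-duration) distributions are then recovered from local slope signs; all parameters (slopes, noise level, mean duration) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_switching_series(n_steps=500, slopes=(0.5, -0.5), sigma=0.2,
                              mean_duration=50):
    """Piecewise-linear series with random state durations (illustrative)."""
    series, state, t, i = [], 0, 0.0, 0
    while i < n_steps:
        dur = max(1, int(rng.exponential(mean_duration)))
        for _ in range(min(dur, n_steps - i)):
            t += slopes[state] + sigma * rng.standard_normal()
            series.append(t)
            i += 1
        state = 1 - state                      # switch to the other state
    return np.asarray(series)

def estimate_state_durations(series, window=10):
    """Label each point by the sign of the smoothed local slope, collect run lengths."""
    slopes = np.diff(series)
    labels = np.convolve(slopes, np.ones(window) / window, mode="same") > 0
    durations, run = {0: [], 1: []}, 1
    for a, b in zip(labels[:-1], labels[1:]):
        if a == b:
            run += 1
        else:
            durations[int(a)].append(run)      # a run of state `a` just ended
            run = 1
    durations[int(labels[-1])].append(run)
    return durations

x = simulate_switching_series()
d = estimate_state_durations(x)
print(len(x), len(d[0]), len(d[1]))
```

The recovered run-length samples play the role of the empirical duration distributions from which switching times would then be drawn when modeling new trajectories.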
-
Origin and growth of the disorder within an ordered state of the spatially extended chemical reaction model
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 595-607
We review the main points of the mean-field approximation (MFA) as applied to multicomponent stochastic reaction-diffusion systems.
We present the chemical reaction model under study, the Brusselator. We write the kinetic equations of the reaction, supplementing them with terms that describe the diffusion of the intermediate components and the fluctuations of the concentrations of the initial products. We model the fluctuations as random Gaussian homogeneous and spatially isotropic fields with zero means and spatial correlation functions with a non-trivial structure. The model parameter values correspond to a spatially inhomogeneous ordered state in the deterministic case.
In the MFA we derive a single-site two-dimensional nonlinear self-consistent Fokker–Planck equation in the Stratonovich interpretation for the spatially extended stochastic Brusselator, which describes the dynamics of the probability distribution density of the component concentration values of the system under consideration. We find the noise intensity values corresponding to two types of solutions of the Fokker–Planck equation: a solution with transient bimodality and a solution with multiple alternation of unimodal and bimodal types of the probability density. We study numerically the probability density dynamics and the time behavior of the variances, expectations, and most probable values of the component concentrations at various noise intensities and values of the bifurcation parameter in the specified region of the problem parameters.
Starting from a certain value of the external noise intensity, disorder originates inside the ordered phase and exists for a finite time; the higher the noise level, the longer this disorder “embryo” lives. The farther from the bifurcation point, the lower the noise that generates it and the narrower the range of noise intensities at which the system evolves to an ordered, but already new, statistically steady state. At a second noise intensity value, intermittency of the ordered and disordered phases occurs. As the noise intensity increases further, order and disorder alternate ever more frequently.
Thus, the scenario of the noise induced order–disorder transition in the system under study consists in the intermittency of the ordered and disordered phases.
-
Variance reduction for minimax problems with a small dimension of one of the variables
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 257-275
The paper is devoted to convex-concave saddle point problems where the objective is a sum of a large number of functions. Such problems attract considerable attention of the mathematical community due to the variety of applications in machine learning, including adversarial learning, adversarial attacks and robust reinforcement learning, to name a few. The individual functions in the sum usually represent losses related to examples from a data set. Additionally, the formulation admits a possibly nonsmooth composite term. Such terms often reflect regularization in machine learning problems. We assume that the dimension of one of the variable groups is relatively small (about a hundred or less), and the other one is large. This case arises, for example, when one considers the dual formulation of a minimization problem with a moderate number of constraints. The proposed approach is based on using Vaidya’s cutting plane method to minimize with respect to the outer block of variables. This optimization algorithm is especially effective when the dimension of the problem is not very large. An inexact oracle for Vaidya’s method is computed via an approximate solution of the inner maximization problem, which is solved by the accelerated variance-reduced algorithm Katyusha. Thus, we leverage the structure of the problem to achieve fast convergence. Separate complexity bounds for gradients of different components with respect to different variables are obtained in the study. The proposed approach imposes very mild assumptions on the objective. In particular, neither strong convexity nor smoothness is required with respect to the low-dimensional variable group. The number of steps of the proposed algorithm, as well as the arithmetic complexity of each step, explicitly depends on the dimensionality of the outer variable, hence the assumption that it is relatively small.
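The outer-inner structure described above can be illustrated on a toy problem. In the sketch below, plain gradient steps stand in for Vaidya's cutting-plane method (outer minimization) and for Katyusha (inner maximization); the objective f(x, y) = x^2 + 2xy - y^2 and all step sizes are assumptions made for the example.

```python
# Toy outer-inner loop for min_x max_y f(x, y) = x^2 + 2xy - y^2,
# which is convex in x, concave in y, with saddle point (0, 0).

def inner_argmax(x, steps=200, lr=0.1):
    """Approximate inner maximizer (gradient ascent stands in for Katyusha)."""
    y = 0.0
    for _ in range(steps):
        y += lr * (2 * x - 2 * y)      # df/dy = 2x - 2y
    return y

x = 3.0
for _ in range(100):
    y = inner_argmax(x)                # inexact oracle for the outer problem
    x -= 0.1 * (2 * x + 2 * y)         # df/dx = 2x + 2y, descent stands in for Vaidya
print(x, y)
```

The point of the structure is the same as in the paper: the outer method only ever sees an inexact value supplied by the approximate inner solve, and the outer dimension (here one) is kept small.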
-
Mathematical modeling of the interval stochastic thermal processes in technical systems at the interval indeterminacy of the determinative parameters
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 501-520
The mathematical and computer modeling of thermal processes in technical systems currently performed is based on the assumption that all the parameters determining the thermal processes are fully and unambiguously known and identified (i.e., deterministic). Meanwhile, experience shows that the parameters determining the thermal processes are of an undetermined interval-stochastic character, which in turn is responsible for the interval-stochastic nature of the thermal processes in the electronic system. This means that the actual temperature values of each element in a technical system will be randomly distributed within their variation intervals. Therefore, the deterministic approach to modeling thermal processes, which yields specific values of element temperatures, does not allow one to adequately calculate the temperature distribution in electronic systems. The interval-stochastic nature of the parameters determining the thermal processes depends on three groups of factors: (a) statistical technological variation of the parameters of the elements when manufacturing and assembling the system; (b) the random nature of the factors caused by the functioning of a technical system (fluctuations in current and voltage; power, temperatures, and flow rates of the cooling fluid and the medium inside the system); and (c) the randomness of ambient parameters (temperature, pressure, and flow rate). The interval-stochastic indeterminacy of the determinative factors in technical systems is irremediable; neglecting it causes errors when designing electronic systems. A method that allows modeling of unsteady interval-stochastic thermal processes in technical systems (including those under interval indeterminacy of the determinative parameters) is developed in this paper.
The method is based on obtaining and then solving equations for the unsteady statistical measures (mathematical expectations, variances, and covariances) of the temperature distribution in a technical system at given variation intervals and statistical measures of the determinative parameters. Application of the elaborated method to modeling the interval-stochastic thermal process in a particular electronic system is considered.
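As a minimal illustration of the moment-equation idea (not the method of the paper), consider a single element whose temperature obeys a linear stochastic heat balance dT = -a(T - T_amb)dt + s dW; its mean m(t) and variance v(t) then satisfy closed ODEs that can be integrated directly. All coefficients below are assumed for the example.

```python
# Moment equations for the linear stochastic heat balance
#   dT = -a (T - T_amb) dt + s dW:
#   dm/dt = -a (m - T_amb),    dv/dt = -2 a v + s**2
a, T_amb, s = 0.5, 25.0, 1.0     # assumed cooling rate, ambient temp, noise level
m, v, dt = 60.0, 0.0, 1e-3       # initial element temperature 60 C, zero spread

for _ in range(int(40 / dt)):    # forward Euler to near-stationarity
    m += dt * (-a * (m - T_amb))
    v += dt * (-2 * a * v + s ** 2)

print(m, v)                      # stationary values: T_amb and s**2 / (2 * a)
```

The same logic, propagating expectations and (co)variances instead of individual realizations, is what makes the statistical-measure approach cheaper than brute-force Monte Carlo over the parameter intervals.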
-
Chaotic flow evolution arising in a body force field
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 883-912
This article presents the results of an analytical and computer study of the chaotic evolution of a regular velocity field generated by large-scale harmonic forcing. The authors obtained an analytical solution for the flow stream function and its derived quantities (velocity, vorticity, kinetic energy, enstrophy and palinstrophy). Numerical modeling of the flow evolution was carried out using the OpenFOAM software package based on an incompressible model, as well as two in-house implementations of the CABARET and McCormack methods employing a nearly incompressible formulation. Calculations were carried out on a sequence of nested meshes with $64^2$, $128^2$, $256^2$, $512^2$, $1024^2$ cells for two characteristic (asymptotic) Reynolds numbers, characterizing laminar and turbulent evolution of the flow, respectively. Simulations show that a blow-up of the analytical solution takes place in both cases. The energy characteristics of the flow are discussed relying upon the energy curves as well as the dissipation rates. For the fine mesh, the numerical dissipation rate turns out to be several orders of magnitude less than its hydrodynamic (viscous) counterpart. Destruction of the regular flow structure is observed for any of the numerical methods, including at the late stages of laminar evolution, when the numerically obtained distributions are close to the analytical ones. It can be assumed that the prerequisite for the development of instability is the error accumulated during the calculation process. This error leads to unevenness in the distribution of vorticity and, as a consequence, to variations in the vortex intensity, and finally leads to chaotization of the flow. To study the processes of vorticity production, we used two integral vorticity-based quantities: the integral enstrophy ($\zeta$) and the palinstrophy ($P$). The formulation of the problem with periodic boundary conditions allows us to establish a simple connection between these quantities.
In addition, $\zeta$ can act as a measure of the eddy resolution of the numerical method, and palinstrophy determines the degree of production of small-scale vorticity.
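The two integral quantities can be computed directly from a discrete vorticity field. The following sketch (not the authors' solver) evaluates the integral enstrophy and palinstrophy of a single-mode field on a periodic grid with central differences; for the wavenumber-(1,1) mode the exact ratio P/zeta equals 2, which gives a quick check of the eddy-resolution of the discretization.

```python
import numpy as np

# Periodic [0, 2*pi]^2 grid; zeta = 0.5 * integral of w^2,
# P = 0.5 * integral of |grad w|^2, via central differences and np.roll.
n = 256
h = 2 * np.pi / n
x = np.arange(n) * h
X, Y = np.meshgrid(x, x, indexing="ij")
w = np.sin(X) * np.sin(Y)                  # single-mode vorticity field

wx = (np.roll(w, -1, 0) - np.roll(w, 1, 0)) / (2 * h)   # dw/dx, periodic
wy = (np.roll(w, -1, 1) - np.roll(w, 1, 1)) / (2 * h)   # dw/dy, periodic

zeta = 0.5 * np.sum(w ** 2) * h * h
P = 0.5 * np.sum(wx ** 2 + wy ** 2) * h * h
print(zeta, P, P / zeta)                   # exact: pi^2/2, pi^2, ratio 2
```

The small deficit of the computed ratio below 2 is exactly the kind of under-resolution of small-scale vorticity that palinstrophy is sensitive to.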
Keywords: turbulence, vorticity, enstrophy, palinstrophy, dissipation rate, CABARET scheme, McCormack scheme, OpenFOAM.
-
Cluster method of mathematical modeling of interval-stochastic thermal processes in electronic systems
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1023-1038
A cluster method of mathematical modeling of interval-stochastic thermal processes in complex electronic systems (ES) is developed. In the cluster method, a complex ES is represented in the form of a thermal model that is a system of clusters, each of which contains a core combining the heat-generating elements falling into the given cluster, the cluster shell, and a medium flow through the cluster. The state of the thermal process in each cluster at every moment of time is characterized by three interval-stochastic state variables, namely, the temperatures of the core, shell, and medium flow. The elements of each cluster (the core, shell, and medium flow) are in thermal interaction with each other and with the elements of neighboring clusters. In contrast to existing methods, the cluster method allows one to simulate thermal processes in complex ESs, taking into account the uneven distribution of temperature in the medium flow pumped through the ES, the conjugate nature of heat exchange between the medium flow in the ES and the cores and shells of the clusters, and the interval-stochastic nature of the thermal processes in the ES, caused by statistical technological variation in the manufacture and installation of electronic elements in the ES and by random fluctuations in the thermal parameters of the environment. The mathematical model describing the state of the thermal processes in the cluster thermal model is a system of interval-stochastic matrix-block equations with matrix and vector blocks corresponding to the clusters of the thermal model. The solutions to the interval-stochastic equations are statistical measures of the state variables of the thermal processes in the clusters: mathematical expectations, covariances between state variables, and variances. The methodology for applying the cluster method is shown on the example of a real ES.
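A single cluster of such a thermal model can be sketched as three coupled heat balances for the core, shell, and medium flow. The conductances, heat capacities, dissipation, and inlet temperature below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# One cluster: core heated by dissipation, coupled to shell, shell coupled
# to the medium flow, flow carrying heat out toward the inlet temperature.
G_cs, G_sm, G_in = 2.0, 1.0, 3.0       # conductances, W/K (assumed)
C = np.array([50.0, 100.0, 10.0])      # heat capacities of core, shell, medium
P_core, T_inlet = 20.0, 25.0           # core dissipation (W), inlet medium temp (C)

T = np.array([25.0, 25.0, 25.0])       # state: [core, shell, medium]
dt = 0.01
for _ in range(200_000):               # forward Euler to steady state
    q_cs = G_cs * (T[0] - T[1])        # core -> shell heat flow
    q_sm = G_sm * (T[1] - T[2])        # shell -> medium heat flow
    q_in = G_in * (T[2] - T_inlet)     # heat carried away by the flow
    T = T + dt * np.array([(P_core - q_cs) / C[0],
                           (q_cs - q_sm) / C[1],
                           (q_sm - q_in) / C[2]])
print(np.round(T, 2))
```

At steady state every interface carries the full 20 W, so the temperatures stack up as T_inlet + 20/G_in, then + 20/G_sm, then + 20/G_cs. In the paper's setting these deterministic balances are replaced by equations for the statistical measures of the three state variables.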
-
On a possible approach to a sport game with discrete time simulation
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 271-279
The paper proposes an approach to the simulation of a sport game consisting of a discrete set of separate competitions. According to this approach, such a competition is considered as a random process, in general a non-Markovian one. At first we treat the flow of the game as a Markov process, obtaining recursive relationships between the probabilities of achieving certain score states in a tennis match, as well as secondary indicators of the game, such as the expectation and variance of the number of serves needed to finish the game. Then we use a simulation system modeling the match to allow an arbitrary change of the probabilities of the outcomes of the competitions that compose the match. We, for instance, allow the probabilities to depend on the results of previous competitions. Therefore, this paper deals with a modification of the model previously proposed by the authors for sports games with continuous time.
The proposed approach allows one to evaluate not only the probability of the final outcome of the match, but also the probabilities of reaching each of the possible intermediate results, as well as secondary indicators of the game, such as the number of separate competitions it takes to finish the match. The paper includes a detailed description of the construction of a simulation system for a game within a tennis match. We then consider simulating a set and the whole tennis match by analogy. We establish some statements concerning the fairness of tennis serving rules, understood as independence of the outcome of a competition from the right to serve first. We perform a simulation of a cancelled ATP series match, obtaining its most probable intermediate and final outcomes for three different possible variants of the course of the match.
The main result of this paper is the developed match simulation method, applicable not only to tennis but also to other types of sports games with discrete time.
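The recursive score-state computation can be illustrated for a single tennis game under the Markov assumption: with a fixed probability p that the server wins each point, the probability of winning the game follows from a recursion over score states, with deuce and advantage handled in closed form. This is the generic textbook recursion, not the authors' full simulation system.

```python
from functools import lru_cache

def game_win_prob(p):
    """Probability that the server wins a game, given point-win probability p."""
    q = 1.0 - p
    deuce = p * p / (p * p + q * q)   # from deuce: win two in a row, else restart
    @lru_cache(maxsize=None)
    def f(a, b):                      # server has a points, receiver has b
        if a >= 3 and b >= 3:         # deuce / advantage region
            if a == b:
                return deuce
            return p + q * deuce if a > b else p * deuce
        if a == 4:
            return 1.0
        if b == 4:
            return 0.0
        return p * f(a + 1, b) + q * f(a, b + 1)
    return f(0, 0)

print(round(game_win_prob(0.6), 4))   # -> 0.7357
```

Replacing the constant p with a function of the preceding results turns the same recursion into a simulation of the non-Markovian variant discussed in the paper.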
-
Estimation of models parameters for time series with Markov switching regimes
Computer Research and Modeling, 2018, v. 10, no. 6, pp. 903-918
The paper considers the problem of estimating the parameters of time series described by regression models with Markov switching between two regimes at random instants of time, with independent Gaussian noise. For the solution, we propose a variant of the EM algorithm based on an iterative procedure during which the regression parameters are estimated for a given sequence of regime switches, and the switching sequence is estimated for the given parameters of the regression models. In contrast to the well-known methods of estimating regression parameters in models with Markov switching, which are based on the calculation of a posteriori probabilities of the discrete states of the switching sequence, in this paper estimates of the switching sequence are calculated that are optimal by the criterion of maximum a posteriori probability. As a result, the proposed algorithm turns out to be simpler and requires fewer calculations. Computer modeling allows us to reveal the factors influencing the estimation accuracy. Such factors include the number of observations, the number of unknown regression parameters, the degree of their difference in the different modes of operation, and the signal-to-noise ratio, which is associated with the coefficient of determination in the regression models. The proposed algorithm is applied to the problem of estimating parameters in regression models for the daily rate of return of the RTS index, depending on the returns of the S&P 500 index and Gazprom shares for the period from 2013 to 2018. The estimates of the parameters found using the proposed algorithm are compared with the estimates formed using the EViews econometric package and with estimates of the ordinary least squares method without taking regime switching into account.
Taking regime switching into account provides a more accurate representation of the structure of the statistical dependence between the variables under study. In the switching models, an increase in the signal-to-noise ratio reduces the differences between the estimates produced by the proposed algorithm and those obtained using the EViews program.
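The alternation between parameter estimation and switching-sequence estimation can be illustrated by a deliberately simplified scheme: two linear-regression regimes, with relabeling by the smaller squared residual standing in for the maximum a posteriori evaluation of the switching sequence (the Markov structure of the switching is ignored here). The data, true slopes, and initial guesses are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y = b_k * x + noise, regime k switching in blocks of 100.
n = 400
x = rng.standard_normal(n)
regimes = (np.arange(n) // 100) % 2
b_true = np.array([2.0, -1.0])
y = b_true[regimes] * x + 0.1 * rng.standard_normal(n)

b = np.array([1.0, 0.0])          # asymmetric initial slope guesses
for _ in range(20):
    # (i) relabel: assign each observation to the regime with smaller residual
    labels = np.argmin((y[:, None] - x[:, None] * b[None, :]) ** 2, axis=1)
    # (ii) refit: per-regime least-squares slope
    b = np.array([(x[labels == k] @ y[labels == k]) /
                  (x[labels == k] @ x[labels == k]) for k in (0, 1)])
print(np.round(b, 2))
```

The full algorithm additionally scores candidate switching sequences by their a posteriori probability under the Markov chain, which penalizes implausibly frequent switches; the residual rule above is only the degenerate special case of that criterion.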
-
On Accelerated Methods for Saddle-Point Problems with Composite Structure
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 433-467
We consider strongly-convex-strongly-concave saddle-point problems with a general non-bilinear objective and different condition numbers with respect to the primal and dual variables. First, we consider such problems with smooth composite terms, one of which has a finite-sum structure. For this setting we propose a variance reduction algorithm with complexity estimates superior to the existing bounds in the literature. Second, we consider finite-sum saddle-point problems with composite terms and propose several algorithms depending on the properties of the composite terms. When the composite terms are smooth, we obtain better complexity bounds than those in the literature, including the bounds of a recently proposed nearly optimal algorithm that does not consider the composite structure of the problem. If the composite terms are prox-friendly, we propose a variance reduction algorithm that, on the one hand, is accelerated compared to existing variance reduction algorithms and, on the other hand, provides in the composite setting complexity bounds similar to those of the nearly optimal algorithm designed for the noncomposite setting. Besides, our algorithms allow one to separate the complexity bounds, i.e., to estimate, for each part of the objective separately, the number of oracle calls that is sufficient to achieve a given accuracy. This is important, since different parts can have different arithmetic complexity of the oracle, and it is desirable to call expensive oracles less often than cheap ones. The key to all these results is our general framework for saddle-point problems, which may be of independent interest. This framework, in turn, is based on our proposed Accelerated Meta-Algorithm for composite optimization with probabilistic inexact oracles and probabilistic inexactness in the proximal mapping, which may be of independent interest as well.
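The notion of a prox-friendly composite term can be illustrated with the l1 norm, whose proximal mapping has the well-known closed-form soft-thresholding solution; it is this closed form that lets an algorithm handle the composite part exactly at negligible cost.

```python
import numpy as np

def prox_l1(v, lam):
    """argmin_x  lam * ||x||_1 + 0.5 * ||x - v||^2  (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

v = np.array([3.0, -0.5, 1.2])
print(prox_l1(v, 1.0))   # components shrink toward zero by lam, small ones vanish
```

A smooth composite term, by contrast, is handled through its gradient oracle, which is why the two cases lead to different algorithms and bounds in the paper.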
-
Statistically fair price for the European call options according to the discrete mean/variance model
Computer Research and Modeling, 2014, v. 6, no. 5, pp. 861-874
We consider a portfolio consisting of a call option and the corresponding underlying asset under the standard assumption that the stock-market price is a random variable with a lognormal distribution. Minimizing the variance of the hedging risk of the portfolio on the maturity date of the call option, we find the fraction of the asset per unit call option. As a direct consequence we derive the statistically fair call option price in explicit form. In contrast to the famous Black–Scholes theory, no portfolio can be regarded as risk-free, because no additional transactions are supposed to be conducted over the life of the contract, but a sequence of independent portfolios reduces the risk to zero asymptotically. This property is illustrated in the experimental section using a dataset of daily stock prices of 37 leading US-based companies for the period from April 2006 to January 2013.
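A Monte Carlo sketch of the variance-minimizing hedge under the lognormal assumption follows. Here the hedge fraction is the regression coefficient of the option payoff on the terminal asset price (the minimizer of the payoff-mismatch variance), and the statistically fair price is estimated as the expected payoff; all parameters are assumed, and discounting is ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Lognormal terminal price S_T and European call payoff (assumed parameters).
S0, K, mu, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
S_T = S0 * np.exp((mu - 0.5 * sigma ** 2) * T
                  + sigma * np.sqrt(T) * rng.standard_normal(200_000))
payoff = np.maximum(S_T - K, 0.0)

# Variance-minimizing shares per short call: h = Cov(S_T, payoff) / Var(S_T).
h = np.cov(S_T, payoff)[0, 1] / np.var(S_T, ddof=1)
fair_price = payoff.mean()
print(round(h, 3), round(fair_price, 3))
```

The residual variance of a single hedged portfolio stays strictly positive, which matches the abstract's point: only a sequence of independent portfolios drives the risk to zero asymptotically.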
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index