Search results for 'ergodicity':
Articles found: 3
  1. Malinetsky G.G., Faller D.S.
    Transition to chaos in the "reaction–diffusion" systems. The simplest models
    Computer Research and Modeling, 2014, v. 6, no. 1, pp. 3-12

    The article discusses the emergence of chaotic attractors in a system of three ordinary differential equations arising in the theory of "reaction-diffusion" systems. The dynamics of the corresponding one- and two-dimensional maps and the Lyapunov exponents of the resulting attractors are studied. It is shown that the transition to chaos follows a non-traditional scenario of repeated birth and disappearance of chaotic regimes, previously studied for one-dimensional maps with a sharp apex and a quadratic minimum. Characteristic features of the system, such as zones of bistability and hyperbolicity and the crisis of chaotic attractors, are studied by means of numerical analysis.
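
    As a concrete illustration of the diagnostic used above (a standard textbook computation, not the authors' code): the largest Lyapunov exponent of a one-dimensional map $x_{n+1} = f(x_n)$ is the trajectory average of $\ln|f'(x_n)|$, and a positive value signals chaos. Below is a minimal Python sketch using the logistic map as a generic stand-in for the maps studied in the paper (the map, the parameter r, and the iteration counts are illustrative assumptions):

      import math

      def logistic(x, r=4.0):
          # Logistic map: a generic stand-in for the 1D maps in the paper.
          return r * x * (1.0 - x)

      def logistic_deriv(x, r=4.0):
          return r * (1.0 - 2.0 * x)

      def lyapunov_exponent(f, df, x0=0.1, n_transient=1_000, n_iter=100_000):
          # Estimate lambda = lim (1/n) * sum_i ln|f'(x_i)| along a trajectory.
          x = x0
          for _ in range(n_transient):   # discard the transient
              x = f(x)
          acc = 0.0
          for _ in range(n_iter):
              acc += math.log(abs(df(x)))
              x = f(x)
          return acc / n_iter

      print(lyapunov_exponent(logistic, logistic_deriv))

    For r = 4 the estimate converges to $\ln 2 \approx 0.693$; sweeping the map parameter and watching the sign of the exponent is one standard way to trace the repeated birth and disappearance of chaotic regimes.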

  2. Rudenko V.D., Yudin N.E., Vasin A.A.
    Survey of convex optimization of Markov decision processes
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 329-353

    This article reviews both historical achievements and modern results in the field of Markov Decision Processes (MDP) and convex optimization. It is the first attempt to survey the field of reinforcement learning in Russian in the context of convex optimization. The fundamental Bellman equation is considered together with the optimality criteria for the policies based on it, that is, strategies that make decisions from the currently known state of the environment. The main iterative policy-optimization algorithms built on solving the Bellman equations are also considered. An important part of the article is the treatment of an alternative to the $Q$-learning approach: direct maximization of the agent's average reward, for the chosen strategy, from interaction with the environment. The solution of this convex optimization problem can be represented as a linear programming problem.

    The paper demonstrates how the convex optimization apparatus is used to solve the Reinforcement Learning (RL) problem. In particular, it shows how the concept of strong duality allows a natural modification of the RL problem formulation, establishing the equivalence between maximizing the agent's reward and finding its optimal strategy. The paper also discusses the complexity of MDP optimization with respect to the number of state–action–reward triples obtained from interaction with the environment. Optimal complexity bounds for solving an MDP are presented for an ergodic process with an infinite horizon, and for a non-stationary process with a finite horizon that can be restarted several times in a row or run in parallel in several threads. The review also covers recent results on narrowing the gap between the lower and upper complexity bounds for optimizing MDPs with average reward (Average-reward MDP, AMDP).

    In conclusion, a real-valued parametrization of the agent's policy is considered together with a class of gradient optimization methods that maximize the $Q$-value function. In particular, a special class of MDPs with constraints on the policy value (Constrained Markov Decision Process, CMDP) is presented, for which a general primal-dual optimization approach with strong duality is proposed.
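
    As a hedged illustration of the iterative Bellman-equation algorithms mentioned above (a standard method, not code from the paper): value iteration solves $V(s) = \max_a \left[ R(s,a) + \gamma \sum_{s'} P(s'|s,a)\,V(s') \right]$ by fixed-point iteration. The two-state, two-action MDP below is invented for the example:

      import numpy as np

      def value_iteration(P, R, gamma=0.95, tol=1e-8):
          # P: transition tensor of shape (A, S, S); R: rewards of shape (S, A).
          # Iterate the Bellman optimality operator to its fixed point.
          S = R.shape[0]
          V = np.zeros(S)
          while True:
              Q = R + gamma * np.einsum('ast,t->sa', P, V)   # Q(s, a)
              V_new = Q.max(axis=1)
              if np.max(np.abs(V_new - V)) < tol:
                  return V_new, Q.argmax(axis=1)             # value and greedy policy
              V = V_new

      # Hypothetical toy MDP (all numbers invented for illustration).
      P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # transitions under action 0
                    [[0.5, 0.5], [0.3, 0.7]]])   # transitions under action 1
      R = np.array([[1.0, 0.0],
                    [0.0, 2.0]])
      V, policy = value_iteration(P, R)
      print(V, policy)

    The same toy problem can equivalently be posed as a linear program, which is the convex-optimization view the survey develops.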

  3. Fialko N.S., Olshevets M.M., Lakhno V.D.
    Numerical study of the Holstein model in different thermostats
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 489-502

    Based on the Holstein Hamiltonian, the dynamics of a charge introduced into a molecular chain of sites was modeled at different temperatures. In the calculations, the temperature of the chain is set by the initial data: random Gaussian distributions of velocities and site displacements. Various options for the initial charge density distribution are considered. Long-term calculations show that the system settles into fluctuations near a new equilibrium state. For the same initial velocities and displacements, the average kinetic energy, and hence the temperature $T$ of the chain, varies depending on the initial charge density distribution: it decreases when a polaron is introduced into the chain, or increases if at the initial moment the electronic part of the energy is at its maximum. A comparison is made with results obtained previously in the model with a Langevin thermostat. In both cases, the existence of a polaron is determined by the thermal energy of the entire chain.

    According to the simulation results, the transition from the polaron mode to the delocalized state occurs in the same range of chain thermal energy values ${\sim}NT$ for a chain of $N$ sites under both thermostat options, with one adjustment: for the Hamiltonian system the temperature does not correspond to the initially set value but is determined after long-term calculations from the average kinetic energy of the chain.

    In the polaron region, the use of different methods for simulating temperature leads to a number of significant differences in the dynamics of the system. In the region of the delocalized charge state, at high temperatures, the results averaged over a set of trajectories in the system with a random force and the time-averaged results for the Hamiltonian system are close, which does not contradict the ergodic hypothesis. From a practical point of view, at high temperatures $T \approx 300$ K, when simulating charge transfer in homogeneous chains, either option for setting the thermostat can be used.
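
    A minimal sketch of the two thermostat options compared above, with a periodic harmonic chain standing in for the lattice part of the Holstein Hamiltonian (the unit stiffness, friction gamma, time step, and Euler-Maruyama integrator are illustrative assumptions, not the authors' scheme; $k_B = 1$):

      import numpy as np

      rng = np.random.default_rng(0)

      def thermal_init(N, T, m=1.0):
          # Hamiltonian-thermostat option: temperature enters only through the
          # initial data (Gaussian velocities and displacements), after which
          # the chain would evolve without friction or noise.
          v = rng.normal(0.0, np.sqrt(T / m), N)
          u = rng.normal(0.0, np.sqrt(T), N)   # unit-stiffness harmonic guess (assumption)
          return u, v

      def force(u):
          # Nearest-neighbour harmonic force on a periodic chain: a placeholder
          # for the site-displacement part of the Holstein Hamiltonian.
          return np.roll(u, 1) - 2.0 * u + np.roll(u, -1)

      def step_langevin(u, v, dt, T, gamma=1.0, m=1.0):
          # Langevin-thermostat option, one Euler-Maruyama step:
          # m dv = (F(u) - gamma*m*v) dt + sqrt(2*gamma*m*T) dW.
          noise = rng.normal(0.0, np.sqrt(2.0 * gamma * m * T * dt), u.size)
          v = v + dt * (force(u) / m - gamma * v) + noise / m
          u = u + dt * v
          return u, v

      T = 1.0
      u, v = thermal_init(N=100, T=T)
      for _ in range(20_000):
          u, v = step_langevin(u, v, dt=0.01, T=T)
      print("kinetic temperature:", np.mean(v**2))   # fluctuates near T

    In the Langevin run the kinetic temperature relaxes to the prescribed T regardless of the initial data; in a purely Hamiltonian run it settles at whatever value the conserved total energy dictates, which is the adjustment the authors describe.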
