Search results for 'stochastic':
Articles found: 53
  1. Bashkirtseva I.A., Boyarshinova P.V., Ryazanova T.V., Ryashko L.B.
    Analysis of noise-induced destruction of coexistence regimes in «prey–predator» population model
    Computer Research and Modeling, 2016, v. 8, no. 4, pp. 647-660

    The paper is devoted to the analysis of the proximity of a population system to dangerous boundaries whose intersection results in the collapse of the stable coexistence of the interacting populations. A reason for such destruction is random perturbations, inevitably present in any living system. The study is carried out on the example of the well-known model of interaction between predator and prey populations, which takes into account both a stabilizing factor, the competition of predators for resources other than prey, and a destabilizing factor, the saturation of predators. To describe the saturation of predators, the Holling type II trophic function is used. The dynamics of the system is studied as a function of the predator saturation and of the coefficient of predator competition for resources other than prey. The paper presents a parametric description of the possible dynamic regimes of the deterministic model. Local and global bifurcations are studied, and the regions of stable coexistence of the populations in equilibrium and oscillation modes are described. An interesting feature of this mathematical model, first considered by Bazykin, is a global bifurcation of the birth of a limit cycle from a separatrix loop. We study the effects of noise on the equilibrium and oscillatory regimes of coexistence of the predator and prey populations. It is shown that an increase in the intensity of random disturbances can lead to significant deformations of these regimes, up to their destruction. The aim of this work is to develop a constructive probabilistic criterion for the proximity of the stochastic population system to the dangerous boundaries. The proposed approach is based on the mathematical technique of stochastic sensitivity functions and the method of confidence domains. In the case of a stable equilibrium, the confidence domain is an ellipse; for a stable cycle, it is a confidence band. The size of the confidence domain is proportional to the noise intensity and the stochastic sensitivity of the initial deterministic attractor. A geometric criterion for the exit of the population system from the mode of stable coexistence is the intersection of the confidence domain with the corresponding separatrix of the unforced deterministic model. The effectiveness of this analytical approach is confirmed by the good agreement between the theoretical estimates and the results of direct numerical simulations.
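
    A minimal sketch of the confidence-ellipse construction described above, assuming a Bazykin-type prey-predator system with Holling type II response and illustrative parameter values (the equations and numbers below are not taken from the paper): the stochastic sensitivity matrix W of a stable equilibrium solves the Lyapunov equation J W + W J^T + S = 0 with Jacobian J and noise matrix S, and the confidence ellipse is W scaled by the squared noise intensity and a chi-square quantile.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov
    from scipy.optimize import fsolve

    a, gamma, delta = 1.5, 0.35, 0.1        # hypothetical model parameters

    def f(z):                               # Bazykin-type prey-predator right-hand side
        x, y = z
        return np.array([x * (1.0 - x) - x * y / (1.0 + a * x),
                         -gamma * y + x * y / (1.0 + a * x) - delta * y ** 2])

    eq = fsolve(f, [0.9, 0.3])              # coexistence equilibrium (initial guess chosen nearby)

    h = 1e-6                                # Jacobian at the equilibrium by central differences
    J = np.column_stack([(f(eq + h * e) - f(eq - h * e)) / (2 * h) for e in np.eye(2)])

    S = np.eye(2)                           # unit noise matrix
    W = solve_continuous_lyapunov(J, -S)    # stochastic sensitivity: J W + W J^T + S = 0

    eps = 0.02                              # noise intensity (illustrative)
    k2 = -2.0 * np.log(1.0 - 0.95)          # 95% quantile of chi-square with 2 d.o.f.
    semi_axes = np.sqrt(np.linalg.eigvalsh(eps ** 2 * k2 * W))
    print("equilibrium:", eq)
    print("confidence-ellipse semi-axes:", semi_axes)

    In this construction, proximity to a dangerous boundary can be judged by whether the ellipse reaches the separatrix of the deterministic model; the semi-axes grow linearly with the noise intensity eps.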

    Views (last year): 14. Citations: 4 (RSCI).
  2. Kurushina S.E., Shapovalova E.A.
    Origin and growth of the disorder within an ordered state of the spatially extended chemical reaction model
    Computer Research and Modeling, 2017, v. 9, no. 4, pp. 595-607

    We review the main points of the mean-field approximation (MFA) as applied to multicomponent stochastic reaction–diffusion systems.

    We present the chemical reaction model under study, the Brusselator. We write the kinetic equations of the reaction, supplementing them with terms that describe the diffusion of the intermediate components and the fluctuations of the concentrations of the initial products. We model the fluctuations as random Gaussian, homogeneous, and spatially isotropic fields with zero means and spatial correlation functions of a non-trivial structure. The model parameter values correspond to a spatially inhomogeneous ordered state in the deterministic case.

    In the MFA, we derive a single-site two-dimensional nonlinear self-consistent Fokker–Planck equation in the Stratonovich interpretation for the spatially extended stochastic Brusselator, which describes the dynamics of the probability density of the component concentrations of the system under consideration. We find the noise intensity values corresponding to two types of solutions of the Fokker–Planck equation: a solution with transient bimodality and a solution with multiple alternation of unimodal and bimodal types of the probability density. We study numerically the dynamics of the probability density and the time behavior of the variances, expectations, and most probable values of the component concentrations at various values of the noise intensity and of the bifurcation parameter in the specified region of the problem parameters.

    Beginning from some value of the external noise intensity, disorder originates inside the ordered phase and exists for a finite time; the higher the noise level, the longer this disorder “embryo” lives. The farther from the bifurcation point, the lower the noise that generates it and the narrower the range of noise intensity values at which the system evolves to an ordered, but already new, statistically steady state. At some second value of the noise intensity, intermittency of the ordered and disordered phases occurs. A further increase in the noise intensity leads to increasingly frequent alternation of order and disorder.

    Thus, the scenario of the noise-induced order–disorder transition in the system under study consists in the intermittency of the ordered and disordered phases.
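
    A minimal sketch of the kind of spatially extended stochastic Brusselator described above, with assumed parameter values and Gaussian white-noise fluctuations of the product concentrations A and B on a 1-D lattice (the paper uses spatially correlated fields and the Stratonovich interpretation; the explicit Euler-Maruyama scheme below corresponds to the Ito one):

    import numpy as np

    rng = np.random.default_rng(0)
    N, L = 128, 64.0                        # lattice size and length (illustrative)
    dx, dt = L / N, 1e-3
    A, B = 2.0, 4.5                         # Turing-unstable, Hopf-stable regime here
    Du, Dv = 1.0, 8.0                       # diffusivities of the intermediate components
    sigA, sigB = 0.05, 0.05                 # intensities of the fluctuations of A and B

    u = A + 0.01 * rng.standard_normal(N)
    v = B / A + 0.01 * rng.standard_normal(N)

    def lap(w):                             # 1-D Laplacian with periodic boundaries
        return (np.roll(w, 1) - 2.0 * w + np.roll(w, -1)) / dx ** 2

    for _ in range(100_000):
        xiA = sigA * rng.standard_normal(N)          # fluctuation of initial product A
        xiB = sigB * rng.standard_normal(N)          # fluctuation of initial product B
        fu = A - (B + 1.0) * u + u * u * v + Du * lap(u)
        fv = B * u - u * u * v + Dv * lap(v)
        u, v = (u + dt * fu + np.sqrt(dt) * (xiA - u * xiB),
                v + dt * fv + np.sqrt(dt) * u * xiB)

    print("lattice mean and std of u:", u.mean(), u.std())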

    Views (last year): 7.
  3. Klenov S.L., Wegerle D., Kerner B.S., Schreckenberg M.
    Prediction of moving and unexpected motionless bottlenecks based on three-phase traffic theory
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 319-363

    We present a simulation methodology for the prediction of "unexpected" bottlenecks, i.e., bottlenecks that occur suddenly and unexpectedly for drivers on a highway. Such an unexpected bottleneck can be either a moving bottleneck (MB) caused by a slowly moving vehicle or a motionless bottleneck caused by a stopped vehicle (SV). Based on simulations of a stochastic microscopic traffic flow model in the framework of Kerner's three-phase traffic theory, we show that reliable prediction of "unexpected" bottlenecks is possible through the use of a small share of probe vehicles (FCD) randomly distributed in traffic flow. We have found that the time dependence of the probability of MB and SV prediction, as well as the accuracy of the estimation of the MB and SV location, depend considerably on the sequences of phase transitions from free flow (F) to synchronized flow (S) (F→S transitions) and back from synchronized flow to free flow (S→F transitions), as well as on speed oscillations in synchronized flow at the bottleneck. In the simulation approach, the identification of F→S and S→F transitions at an unexpected bottleneck has been made in accordance with Kerner's three-phase traffic theory. The presented simulation methodology allows both the prediction of an unexpected bottleneck that suddenly occurs on a highway and the distinguishing of its origin, i.e., whether the unexpected bottleneck has occurred due to an MB or an SV.
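
    A toy sketch of bottleneck localization from probe-vehicle (FCD) data. It does not use the Kerner-Klenov stochastic three-phase model of the paper: speeds are generated by a synthetic profile with a stopped vehicle at an assumed position, and the bottleneck location is estimated from the downstream front of the low-speed region reported by a small random share of probes.

    import numpy as np

    rng = np.random.default_rng(5)
    road_len, x_sv = 10_000.0, 6_500.0      # road length and true SV position, m (assumed)
    v_free, v_sync = 30.0, 8.0              # free-flow and congested speeds, m/s
    queue_len = 800.0                       # congested region upstream of the SV

    n_veh, probe_share = 2000, 0.02         # 2% of the vehicles send FCD
    x = rng.uniform(0.0, road_len, n_veh)
    v = np.where((x > x_sv - queue_len) & (x < x_sv), v_sync, v_free)
    v = v + rng.normal(0.0, 1.5, n_veh)     # speed oscillations / measurement noise

    probes = rng.random(n_veh) < probe_share
    x_p, v_p = x[probes], v[probes]
    slow = v_p < 0.5 * v_free               # crude low-speed (synchronized-flow-like) test
    if slow.any():
        print("true SV position   : %.0f m" % x_sv)
        print("estimated position : %.0f m" % x_p[slow].max())
    else:
        print("no probe has entered the congested region yet")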

  4. Kutalev A.A., Lapina A.A.
    Modern ways to overcome neural networks catastrophic forgetting and empirical investigations on their structural issues
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 45-56

    This paper presents the results of experimental validation of some structural issues concerning the practical use of methods for overcoming catastrophic forgetting in neural networks. A comparison of current effective methods such as EWC (Elastic Weight Consolidation) and WVA (Weight Velocity Attenuation) is made, and their advantages and disadvantages are considered. It is shown that EWC is better for tasks where full retention of the learned skills is required on all the tasks in the training queue, while WVA is more suitable for sequential tasks with very limited computational resources, or when reuse of representations and acceleration of learning from task to task is required rather than exact retention of the skills. The attenuation in the WVA method must be applied to the optimization step, i.e. to the increments of the neural network weights, rather than to the loss function gradient itself, and this is true for any gradient optimization method except the simplest stochastic gradient descent (SGD). The choice of the optimal weight attenuation function, between the hyperbolic function and the exponential, is considered. It is shown that hyperbolic attenuation is preferable because, despite comparable quality at optimal values of the hyperparameter of the WVA method, it is more robust to deviations of the hyperparameter from the optimal value (this hyperparameter provides a balance between preservation of old skills and learning of a new skill). Empirical observations are presented that support the hypothesis that the optimal value of this hyperparameter does not depend on the number of tasks in the sequential learning queue; consequently, it can be tuned on a small number of tasks and used on longer sequences.
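
    A minimal sketch of the point about where the attenuation must be applied, using a toy quadratic loss, a hypothetical per-weight importance vector, and a standard Adam optimizer (none of this reproduces the authors' experiments): with an adaptive method, scaling the gradient is largely undone by the normalization, whereas scaling the optimizer's weight increment actually slows the change of the important weights.

    import numpy as np

    def grad(w):                            # gradient of a toy quadratic loss with minimum at (1, -2)
        return 2.0 * (w - np.array([1.0, -2.0]))

    omega = np.array([20.0, 0.1])           # hypothetical per-weight importances
    lam = 1.0
    atten = 1.0 / (1.0 + lam * omega)       # hyperbolic attenuation factors

    def run(attenuate_step, steps=150, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
        w = np.zeros(2); m = np.zeros(2); v = np.zeros(2)
        for t in range(1, steps + 1):
            g = grad(w)
            if not attenuate_step:          # variant 1: attenuate the gradient itself
                g = atten * g
            m = b1 * m + (1.0 - b1) * g     # Adam first and second moments
            v = b2 * v + (1.0 - b2) * g * g
            step = -lr * (m / (1.0 - b1 ** t)) / (np.sqrt(v / (1.0 - b2 ** t)) + eps)
            if attenuate_step:              # variant 2: attenuate the weight increment
                step = atten * step
            w = w + step
        return np.round(w, 3)

    print("attenuated gradient:", run(attenuate_step=False))   # ends near the new optimum (1, -2)
    print("attenuated step    :", run(attenuate_step=True))    # the important first weight moves far less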

  5. The mathematical and computer modeling of thermal processes in technical systems currently performed is based on the assumption that all the parameters determining the thermal processes are fully and unambiguously known and identified (i.e., deterministic). Meanwhile, experience shows that the parameters determining the thermal processes are of an undefined interval-stochastic character, which in turn is responsible for the interval-stochastic nature of the thermal processes in the electronic system. This means that the actual temperature values of each element in a technical system will be randomly distributed within their variation intervals. Therefore, the deterministic approach to modeling of thermal processes, which yields specific values of element temperatures, does not allow one to adequately calculate the temperature distribution in electronic systems. The interval-stochastic nature of the parameters determining the thermal processes depends on three groups of factors: (a) statistical technological variation of the parameters of the elements when manufacturing and assembling the system; (b) the random nature of the factors caused by the functioning of a technical system (fluctuations in current and voltage; power, temperatures, and flow rates of the cooling fluid and of the medium inside the system); and (c) the randomness of ambient parameters (temperature, pressure, and flow rate). The interval-stochastic indeterminacy of the determining factors in technical systems is irremediable; neglecting it causes errors when designing electronic systems. A method that allows modeling of unsteady interval-stochastic thermal processes in technical systems (including those with interval indeterminacy of the determining parameters) is developed in this paper. The method is based on obtaining and then solving equations for the unsteady statistical measures (mathematical expectations, variances, and covariances) of the temperature distribution in a technical system at given variation intervals and statistical measures of the determining parameters. Application of the developed method to modeling of the interval-stochastic thermal process in a particular electronic system is considered.
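
    A brute-force Monte Carlo sketch of the interval-stochastic setting on a toy one-node lumped thermal model C dT/dt = P - (T - T_amb)/R, with the determining parameters sampled from assumed variation intervals. The paper derives equations for the statistical measures (expectations, variances, covariances) directly; the sampling below only illustrates the kind of quantities those equations describe.

    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_steps, dt = 5000, 10_000, 0.1        # illustrative settings

    # determining parameters sampled within their (assumed) variation intervals
    R = rng.uniform(2.0, 3.0, n_samples)              # thermal resistance, K/W
    C = rng.uniform(40.0, 60.0, n_samples)            # heat capacity, J/K
    P = rng.uniform(4.0, 6.0, n_samples)              # dissipated power, W
    T_amb = rng.uniform(20.0, 30.0, n_samples)        # ambient temperature, deg C

    T = T_amb.copy()                                  # each realization starts at ambient
    for _ in range(n_steps):
        T = T + dt * (P - (T - T_amb) / R) / C        # explicit Euler step

    print("steady-state mean of the element temperature: %.2f C" % T.mean())
    print("steady-state standard deviation             : %.2f K" % T.std())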

    Views (last year): 15. Citations: 6 (RSCI).
  6. Abakumov A.I., Izrailsky Y.G.
    The stabilizing role of fish population structure under the influence of fishery and random environment variations
    Computer Research and Modeling, 2017, v. 9, no. 4, pp. 609-620

    We study the influence of fishery on a structured fish population under random changes of habitat conditions. The population parameters correspond to dominant pelagic fish species of the Far-Eastern seas of the northwestern Pacific Ocean (pollock, herring, sardine). Similar species inhabit various parts of the World Ocean. The body size distribution was chosen as the main population characteristic. This characteristic is easy to measure and adequately reflects the main qualities of a specimen, such as age, maturity, and other morphological and physiological peculiarities. Environmental fluctuations have a great influence on individuals in the early stages of development and little influence on the vital activity of mature individuals. The fishery revenue was chosen as the optimality criterion, and the main control variable is the fishing effort. We have chosen a quadratic dependence of the fishing revenue on the fishing effort, in accordance with accepted economic ideas stating that expenses grow with the production volume. The model study shows that the population structure increases population stability. The growth and drop-out of individuals due to natural mortality smooths the oscillations of population density arising from the strong influence of environmental fluctuations on young individuals; the smoothing role is played by the diffusion component of the growth processes. The fishery, in its turn, smooths the fluctuations (including random fluctuations) of the environment and has a substantial impact upon the abundance of fry and the subsequent population dynamics. The optimal time-dependent fishing effort strategy was compared with a stationary fishing effort strategy. It is shown that in the case of quickly changing habitat conditions and stochastic dynamics of population replenishment there exists a stationary fishing effort having approximately the same efficiency as the optimal time-dependent fishing effort. This means that a constant or weakly varying fishing effort can be a very efficient strategy in terms of revenue.
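
    A toy sketch of the revenue-versus-effort setting described above, on a scalar stochastic logistic stock model rather than the authors' size-structured one (all coefficients are illustrative): it searches for an efficient constant fishing effort under environmental noise, with revenue quadratic in the effort; the optimal time-dependent control of the paper is not computed here.

    import numpy as np

    rng = np.random.default_rng(2)
    r, K, q = 0.8, 1.0, 1.0             # growth rate, capacity, catchability (toy values)
    p, c = 1.0, 0.3                     # price and quadratic cost coefficient
    T, dt, sigma = 4000, 0.05, 0.3      # horizon, time step, environmental noise level

    def revenue(effort_fn):
        X, rev = 0.5, 0.0
        for t in range(T):
            E = effort_fn(t)
            dW = np.sqrt(dt) * rng.standard_normal()
            X += dt * (r * X * (1.0 - X / K) - q * E * X) + sigma * X * dW
            X = max(X, 1e-6)
            rev += dt * (p * q * E * X - c * E * E)   # income minus quadratic effort cost
        return rev

    def expected_revenue(E, n_rep=20):                # average over noise realizations
        return np.mean([revenue(lambda t: E) for _ in range(n_rep)])

    grid = np.linspace(0.05, 0.60, 12)
    best = max(grid, key=expected_revenue)
    print("best constant effort on the grid: %.2f" % best)
    print("its expected revenue            : %.3f" % expected_revenue(best))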

    Views (last year): 6. Citations: 2 (RSCI).
  7. Krat Y.G., Potapov I.I.
    Bottom stability in closed conduits
    Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1061-1068

    In this paper, on the basis of the riverbed model proposed earlier, the one-dimensional stability problem for a closed flow channel with a sandy bed is solved. A feature of the investigated problem is the use of an original equation of riverbed deformations, which takes into account the influence of the mechanical and granulometric characteristics of the bed material and of the bed slope in the riverbed analysis. Another feature of the problem is that, along with the influence of the shear stress, the influence of the normal stress is taken into account when investigating the riverbed instability. An analytical dependence determining the wavelength of the fastest-growing bed perturbations is obtained from the solution of the sandy bed stability problem for the closed flow channel. The analysis of the obtained analytical dependence is performed. It is shown that the obtained dependence generalizes a number of well-known empirical formulas: those of Coleman, Shulyak, and Bagnold. The structure of the obtained analytical dependence indicates the existence of two hydrodynamic regimes, characterized by the Froude number, in which the growth of bed perturbations can depend on the Froude number either strongly or weakly. Considering the natural stochasticity of the bed-wave motion and the presence of a domain of the solution with a weak dependence on the Froude number, it can be concluded that experimental observations of the development of bed-wave motion should yield data with significant scatter, and this is what happens in reality.

    Views (last year): 1. Citations: 2 (RSCI).
  8. Gasnikov A.V., Kubentayeva M.B.
    Searching stochastic equilibria in transport networks by universal primal-dual gradient method
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 335-345

    We consider one of the problems of transport modelling: searching for the equilibrium distribution of traffic flows in a network. We use the classic Beckmann model to describe time costs and flow distribution in the network, represented by a directed graph. Meanwhile, the agents' behavior is not completely rational, which is described by the introduction of Markov logit dynamics: any driver selects a route randomly according to the Gibbs distribution, taking into account the current time costs on the edges of the graph. Thus, the problem is reduced to searching for the stationary distribution of this dynamics, which is a stochastic Nash–Wardrop equilibrium in the corresponding population congestion game on the transport network. Since the game is potential, this problem is equivalent to minimizing some functional over the flow distribution. The stochasticity is reflected in the appearance of an entropy regularization, in contrast to the non-stochastic case. The dual problem is constructed to obtain a solution of the optimization problem, and the universal primal-dual gradient method is applied. A major specificity of this method lies in its adaptive adjustment to the local smoothness of the problem, which is most important in the case of a complex structure of the objective function and the inability to obtain an a priori smoothness bound with acceptable accuracy. Such a situation occurs in the considered problem, since the properties of the function strongly depend on the transport graph, on which we do not impose strong restrictions. The article describes the algorithm, including the numerical differentiation used for calculation of the objective function value and gradient. In addition, the paper presents a theoretical estimate of the time complexity of the algorithm and the results of numerical experiments conducted on the road network of a small American town.
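
    A minimal sketch of the stochastic (logit) equilibrium itself on a toy two-route network with linear cost functions and made-up numbers: drivers split according to the Gibbs distribution over current route costs, and the fixed point is found by the method of successive averages rather than by the universal primal-dual gradient method of the paper.

    import numpy as np

    d = 10.0                                  # total demand between the origin and destination
    a = np.array([1.0, 2.0])                  # free-flow travel times of the two routes
    b = np.array([0.30, 0.10])                # congestion slopes: t_i = a_i + b_i * x_i
    gamma = 0.5                               # "temperature" of the logit (Gibbs) model

    x = np.full(2, d / 2)                     # initial flow split
    for k in range(1, 2001):
        t = a + b * x                         # current route costs
        w = np.exp(-(t - t.min()) / gamma)    # Gibbs weights (shifted for numerical stability)
        y = d * w / w.sum()                   # logit route-choice assignment
        x += (y - x) / k                      # method of successive averages

    print("equilibrium flows:", np.round(x, 3))
    print("equilibrium costs:", np.round(a + b * x, 3))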

    Views (last year): 28.
  9. Dvinskikh D.M., Pirau V.V., Gasnikov A.V.
    On the relations of stochastic convex optimization problems with empirical risk minimization problems on $p$-norm balls
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319

    In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e.g., risk minimization) and mathematical statistics (e.g., maximum likelihood estimation). There are two main approaches to solving such problems, namely the Stochastic Approximation approach (the online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (the offline approach). In the offline approach, the problem is replaced by its empirical counterpart (the empirical risk minimization problem). The natural question is how to define the problem sample size, i.e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is also a solution of the original problem with the desired precision. This issue is one of the main issues in modern machine learning and optimization. In the last decade, a lot of significant advances were made in these areas for solving convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of arbitrary balls in the $p$-norms. We also explore how the parameter $p$ affects the estimates of the required number of terms in the empirical risk.

    In this paper, both convex and saddle-point optimization problems are considered. For strongly convex problems, the existing results on identical sample sizes in both approaches (online and offline) were generalized to arbitrary norms. Moreover, it was shown that the strong convexity condition can be weakened: the obtained results are valid for functions satisfying the quadratic growth condition. In the case when this condition is not met, it is proposed to regularize the original problem in an arbitrary norm. In contrast to convex problems, saddle-point problems are much less studied. For saddle-point problems, the sample size was obtained under the condition of $\gamma$-growth of the objective function. When $\gamma = 1$, this condition is the sharp minimum condition in convex problems. In this article, it was shown that the sample size in the case of a sharp minimum is almost independent of the desired accuracy of the solution of the original problem.
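
    A toy illustration of the online/offline distinction discussed above, in the Euclidean setting only and with Gaussian data (the $p$-norm-ball results of the paper are not reproduced): for min_x E||x - xi||^2 the Sample Average Approximation solution is the sample mean, while Stochastic Approximation is one pass of averaged SGD; both errors decay as the sample size grows.

    import numpy as np

    rng = np.random.default_rng(3)
    dim = 20
    mu = np.linspace(-1.0, 1.0, dim)                    # true minimizer

    for n in (100, 1000, 10000):
        xi = mu + rng.standard_normal((n, dim))         # i.i.d. samples
        x_saa = xi.mean(axis=0)                         # offline: empirical risk minimizer
        x, x_avg = np.zeros(dim), np.zeros(dim)         # online: SGD with Polyak averaging
        for t, sample in enumerate(xi, start=1):
            x -= (1.0 / t) * 2.0 * (x - sample)         # stochastic gradient step
            x_avg += (x - x_avg) / t
        print("n=%6d   SAA error %.4f   SA error %.4f"
              % (n, np.linalg.norm(x_saa - mu), np.linalg.norm(x_avg - mu)))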

  10. Bogomolov S.V.
    Stochastic formalization of the gas dynamic hierarchy
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 767-779

    Mathematical models of gas dynamics and its computational industry, in our opinion, are far from perfect. We look at this problem from the point of view of a clear probabilistic micro-model of a gas of hard spheres, relying both on the theory of random processes and on the classical kinetic theory in terms of densities of distribution functions in phase space: namely, we first construct a system of nonlinear stochastic differential equations (SDEs), and then a generalized, random and non-random, integro-differential Boltzmann equation taking correlations and fluctuations into account. The key feature of the initial model is the random nature of the intensity of the jump measure and its dependence on the process itself.

    We briefly recall the transition to increasingly coarse meso- and macro-approximations in accordance with a decrease of the dimensionless parameter, the Knudsen number. We obtain stochastic and non-random equations, first in phase space (a meso-model in terms of SDEs with respect to the Wiener measure and the Kolmogorov–Fokker–Planck equations), and then in coordinate space (macro-equations that differ from the Navier–Stokes system of equations and from quasi-gas-dynamic systems). The main difference of this derivation is a more accurate averaging over velocity, due to the analytical solution of the stochastic differential equations with respect to the Wiener measure, in the form of which the intermediate meso-model in phase space is presented. This approach differs significantly from the traditional one, which uses not the random process itself but its distribution function. The emphasis is placed on the transparency of the assumptions made in the transition from one level of detail to another, rather than on numerical experiments, which contain additional approximation errors.

    The theoretical power of the microscopic representation of macroscopic phenomena is also important as an ideological support for particle methods, which are an alternative to finite-difference and finite-element methods.
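
    A very reduced sketch of the meso level mentioned above: particle velocities follow a Langevin (Ornstein-Uhlenbeck) SDE with respect to the Wiener measure, dv = -(v/tau) dt + sqrt(2*theta/tau) dW, whose density obeys the corresponding Kolmogorov-Fokker-Planck equation and relaxes to a Maxwellian. The paper's micro-model is driven by a random jump measure for hard-sphere collisions; the parameters below are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(4)
    n, tau, theta = 100_000, 1.0, 1.5       # particles, relaxation time, temperature
    dt, steps = 0.01, 500

    v = rng.uniform(-3.0, 3.0, n)           # far-from-equilibrium initial velocities
    for _ in range(steps):
        dW = np.sqrt(dt) * rng.standard_normal(n)
        v += -(v / tau) * dt + np.sqrt(2.0 * theta / tau) * dW   # Euler-Maruyama step

    vc = v - v.mean()
    print("velocity variance (theory: theta = %.2f): %.3f" % (theta, vc.var()))
    print("excess kurtosis (0 for a Maxwellian)    : %.3f"
          % (np.mean(vc ** 4) / np.mean(vc ** 2) ** 2 - 3.0))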
