Searching stochastic equilibria in transport networks by universal primal-dual gradient method
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 335-345
We consider one of the problems of transport modelling: searching for the equilibrium distribution of traffic flows in a network. We use the classic Beckmann model to describe time costs and flow distribution in a network represented by a directed graph. The agents' behavior is not completely rational, which is described by introducing Markov logit dynamics: each driver selects a route randomly according to the Gibbs distribution, taking into account the current time costs on the edges of the graph. Thus, the problem is reduced to searching for the stationary distribution of these dynamics, which is a stochastic Nash – Wardrop equilibrium in the corresponding population congestion game on the transport network. Since the game is potential, this problem is equivalent to minimizing a certain functional over flow distributions. The stochasticity manifests itself, in contrast to the non-stochastic case, in the appearance of an entropy regularization. To solve the optimization problem, the dual problem is constructed and the universal primal-dual gradient method is applied. A major advantage of this method is its adaptive adjustment to the local smoothness of the problem, which is most important when the objective function has a complex structure and a prior smoothness bound cannot be obtained with acceptable accuracy. Such a situation occurs in the considered problem, since the properties of the function strongly depend on the transport graph, on which we do not impose strong restrictions. The article describes the algorithm, including the numerical differentiation used to calculate the objective function value and gradient. In addition, the paper presents a theoretical estimate of the time complexity of the algorithm and the results of numerical experiments conducted on the network of a small American town.
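The logit route choice described above can be sketched on a toy network. The two-route network, the linear cost functions, and the method-of-successive-averages fixed-point scheme below are illustrative assumptions, not the authors' algorithm: the sketch only shows how a Gibbs (logit) split over current route costs leads to a stationary stochastic equilibrium.

```python
import math

def logit_split(costs, gamma):
    """Gibbs (logit) route-choice probabilities for given route costs."""
    w = [math.exp(-c / gamma) for c in costs]
    s = sum(w)
    return [x / s for x in w]

def stochastic_equilibrium(cost_fns, demand, gamma, iters=500):
    """Fixed-point iteration (method of successive averages) for the
    entropy-regularized (stochastic) equilibrium flow split."""
    flows = [demand / len(cost_fns)] * len(cost_fns)
    for k in range(1, iters + 1):
        costs = [f(x) for f, x in zip(cost_fns, flows)]
        target = [p * demand for p in logit_split(costs, gamma)]
        step = 1.0 / k  # MSA step size
        flows = [(1 - step) * x + step * t for x, t in zip(flows, target)]
    return flows

# Hypothetical two-route network with linear travel-time functions.
routes = [lambda x: 1.0 + 0.01 * x, lambda x: 2.0 + 0.005 * x]
flows = stochastic_equilibrium(routes, demand=100.0, gamma=0.5)
```

At the fixed point the flow split reproduces the logit probabilities evaluated at the equilibrium costs; as gamma decreases, the split approaches the deterministic Wardrop equilibrium.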
-
On the relations of stochastic convex optimization problems with empirical risk minimization problems on $p$-norm balls
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319
In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e. g., risk minimization) and mathematical statistics (e. g., maximum likelihood estimation). There are two main approaches to solving such problems: the Stochastic Approximation approach (online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline approach). In the offline approach, the problem is replaced by its empirical counterpart (the empirical risk minimization problem). The natural question is how to define the sample size, i. e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is also a solution of the original problem with the desired precision. This is one of the main questions in modern machine learning and optimization. In the last decade, significant advances were made in solving convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of arbitrary balls in $p$-norms. We also explore how the parameter $p$ affects the estimates of the required number of terms in the empirical risk.
In this paper, both convex and saddle-point optimization problems are considered. For strongly convex problems, the existing results on equal sample sizes in both approaches (online and offline) were generalized to arbitrary norms. Moreover, it was shown that the strong convexity condition can be weakened: the obtained results are valid for functions satisfying the quadratic growth condition. In the case when this condition is not met, it is proposed to regularize the original problem in an arbitrary norm. In contrast to convex problems, saddle-point problems are much less studied. For saddle-point problems, the sample size was obtained under the condition of $\gamma$-growth of the objective function. When $\gamma = 1$, this condition is the sharp minimum condition for convex problems. In this article, it was shown that the sample size in the case of a sharp minimum is almost independent of the desired accuracy of the solution of the original problem.
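The offline (Sample Average Approximation) approach can be illustrated on the simplest stochastic problem, $\min_x \mathbb{E}(x-\xi)^2$, whose empirical counterpart is minimized by the sample mean. The distribution, sample sizes, and seed below are arbitrary choices for the sketch, not from the paper:

```python
import random

def saa_minimizer(samples):
    """SAA replaces min_x E[(x - xi)^2] by its empirical counterpart
    min_x (1/N) * sum_k (x - xi_k)^2, whose exact solution is the
    sample mean."""
    return sum(samples) / len(samples)

random.seed(0)
true_mean = 3.0  # the minimizer of the original (population) problem
xs_small = [random.gauss(true_mean, 1.0) for _ in range(50)]
xs_large = [random.gauss(true_mean, 1.0) for _ in range(50_000)]
err_small = abs(saa_minimizer(xs_small) - true_mean)
err_large = abs(saa_minimizer(xs_large) - true_mean)
```

The error of the empirical minimizer shrinks as $O(1/\sqrt{N})$, which is exactly the sample-size question the abstract discusses: how large must $N$ be for the empirical solution to solve the original problem to a given precision.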
-
Transport modeling: averaging price matrices
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 317-327
This paper considers various approaches to averaging the generalized travel costs calculated for different modes of travel in a transportation network. A mode of travel is understood to mean both a mode of transport, for example, a car or public transport, and movement without the use of transport, for example, on foot. The task of calculating trip matrices includes the task of calculating the total matrices, in other words, estimating the total demand for trips by all modes, as well as the task of splitting the matrices by mode, also called modal splitting. To calculate trip matrices, gravity, entropy and other models are used, in which the probability of travel between zones is estimated based on a certain measure of the distance between these zones. Usually, the generalized cost of travel along the optimal path between zones is used as the distance measure. However, the generalized cost of travel differs across modes. When calculating the total trip matrices, it becomes necessary to average the generalized costs over the modes of travel. The averaging procedure is subject to the natural requirement of monotonicity in all arguments. This requirement is not met by some commonly used averaging methods, for example, averaging with weights. The problem of modal splitting is solved by applying methods of discrete choice theory. In particular, within the framework of discrete choice theory, correct methods have been developed for averaging the utilities of alternatives that are monotone in all arguments. The authors propose an adaptation of discrete choice methods for calculating the average cost of travel in the gravity and entropy models.
Transferring the averaging formulas from the context of the modal split model to the trip matrix calculation model requires introducing new parameters and deriving conditions on their admissible values, which was done in this article. The paper also considers the recalibration of the gravity function that becomes necessary when switching to a new averaging method, if the existing function was calibrated using the weighted average cost. The proposed methods were implemented on the example of a small fragment of a transport network. The results of the calculations are presented, demonstrating the advantages of the proposed methods.
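The monotonicity issue can be made concrete with a small sketch. The logsum formula from discrete choice theory is monotone in every modal cost, while averaging with logit-share weights (a common "averaging with weights" scheme, used here as an illustrative example) is not: making a bad mode even worse can lower the weighted "average" cost. The two-mode costs and the scale parameter below are arbitrary illustration values:

```python
import math

def logsum_cost(costs, gamma=1.0):
    """Composite travel cost via the logsum formula of discrete choice
    theory: c = -gamma * ln(sum_m exp(-c_m / gamma)).
    Monotone increasing in every modal cost c_m."""
    return -gamma * math.log(sum(math.exp(-c / gamma) for c in costs))

def share_weighted_cost(costs, gamma=1.0):
    """Averaging with logit-share weights: NOT monotone, because the
    weights themselves depend on the costs."""
    w = [math.exp(-c / gamma) for c in costs]
    s = sum(w)
    return sum(wi / s * ci for wi, ci in zip(w, costs))

base = [1.0, 5.0]
worse = [1.0, 6.0]  # the second mode became strictly more expensive
ls_monotone = logsum_cost(worse) > logsum_cost(base)              # holds
sw_violates = share_weighted_cost(worse) < share_weighted_cost(base)  # violation
```

Here the share-weighted average drops (from about 1.07 to about 1.03) when the second mode's cost rises, because the worse mode simply receives less weight; the logsum cost correctly increases.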
-
Numerical solution of integro-differential equations of fractional moisture transfer with the Bessel operator
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 353-373
The paper considers integro-differential equations of fractional-order moisture transfer with the Bessel operator. The studied equations contain the Bessel operator and two Gerasimov – Caputo fractional differentiation operators with different orders $\alpha$ and $\beta$. Two types of integro-differential equations are considered: in the first case, the equation contains a non-local source, i. e., the integral of the unknown function over the space variable $x$, and in the second case, the integral over the time variable $\tau$, which describes a memory effect. Similar problems arise in the study of processes with prehistory. To solve the differential problems for different relations between $\alpha$ and $\beta$, a priori estimates in differential form are obtained, from which the uniqueness and the stability of the solution with respect to the right-hand side and the initial data follow. For the approximate solution of the problems posed, difference schemes are constructed with the order of approximation $O(h^2+\tau^2)$ for $\alpha=\beta$ and $O(h^2+\tau^{2-\max\{\alpha,\beta\}})$ for $\alpha\neq\beta$. The uniqueness, stability and convergence of the solution are studied by the method of energy inequalities. A priori estimates for the solutions of the difference problems are obtained for different relations between $\alpha$ and $\beta$, from which the uniqueness and stability follow, as well as the convergence of the solution of the difference scheme to the solution of the original differential problem at a rate equal to the order of approximation of the difference scheme.
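The time-fractional part of such schemes is typically discretized with the standard L1 approximation of the Gerasimov – Caputo derivative, whose $O(\tau^{2-\alpha})$ error matches the approximation order quoted above for $\alpha\neq\beta$. The sketch below checks the L1 formula against the known exact Caputo derivative of $f(t)=t^2$; the test function, grid, and order are illustrative choices, not the paper's scheme:

```python
import math

def caputo_l1(f_vals, tau, alpha):
    """L1 approximation of the Gerasimov-Caputo derivative of order
    0 < alpha < 1 at the last grid node; error is O(tau^(2-alpha))."""
    n = len(f_vals) - 1
    c = tau ** (-alpha) / math.gamma(2.0 - alpha)
    total = 0.0
    for s in range(n):
        # Standard L1 weights (n-s)^(1-alpha) - (n-s-1)^(1-alpha)
        w = (n - s) ** (1 - alpha) - (n - s - 1) ** (1 - alpha)
        total += w * (f_vals[s + 1] - f_vals[s])
    return c * total

alpha, tau, n = 0.5, 0.01, 100
t_end = n * tau  # = 1.0
grid = [(k * tau) ** 2 for k in range(n + 1)]  # f(t) = t^2
# Exact Caputo derivative of t^2 is 2 t^(2-alpha) / Gamma(3-alpha).
exact = 2.0 * t_end ** (2 - alpha) / math.gamma(3.0 - alpha)
approx = caputo_l1(grid, tau, alpha)
err = abs(approx - exact)
```

Halving $\tau$ reduces `err` by roughly a factor of $2^{2-\alpha}$, which is how the order $O(\tau^{2-\max\{\alpha,\beta\}})$ shows up in practice.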
-
Analysis of additive and parametric noise effects on Morris – Lecar neuron model
Computer Research and Modeling, 2017, v. 9, no. 3, pp. 449-468
This paper is devoted to the analysis of the effect of additive and parametric noise on the processes occurring in a nerve cell. This study is carried out on the example of the well-known Morris – Lecar model described by a two-dimensional system of ordinary differential equations. One of the main properties of a neuron is excitability, i. e., the ability to respond to external stimuli with an abrupt change of the electric potential on the cell membrane. This article considers a set of parameters for which the model exhibits class 2 excitability. The dynamics of the system is studied under variation of the external current parameter. We consider two parametric zones: the monostability zone, where a stable equilibrium is the only attractor of the deterministic system, and the bistability zone, characterized by the coexistence of a stable equilibrium and a limit cycle. We show that in both cases random disturbances result in the phenomenon of stochastic generation of mixed-mode oscillations (i. e., alternating oscillations of small and large amplitudes). In the monostability zone this phenomenon is associated with a high excitability of the system, while in the bistability zone it occurs due to noise-induced transitions between attractors. This phenomenon is confirmed by changes in the probability density functions of random trajectories, power spectral densities and interspike interval statistics. The action of additive and parametric noise is compared. We show that under parametric noise, the stochastic generation of mixed-mode oscillations is observed at lower intensities than under additive noise. For the quantitative analysis of these stochastic phenomena we propose and apply an approach based on the stochastic sensitivity function technique and the method of confidence domains.
In the case of a stable equilibrium, this confidence domain is an ellipse. For the stable limit cycle, this domain is a confidence band. The study of the mutual location of confidence bands and the boundary separating the basins of attraction for different noise intensities allows us to predict the emergence of noise-induced transitions. The effectiveness of this analytical approach is confirmed by the good agreement of theoretical estimations with results of direct numerical simulations.
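The distinction between additive and parametric (multiplicative) noise can be sketched with the Euler – Maruyama scheme on a one-dimensional linear SDE. This is not the Morris – Lecar system: the drift, noise intensities, and seed below are minimal illustrative assumptions, chosen only to show how the two noise types enter the integrator differently:

```python
import random

def euler_maruyama(drift, diffusion, x0, dt, n_steps, rng):
    """Simulate dX = drift(X) dt + diffusion(X) dW by Euler-Maruyama."""
    x = x0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, dt ** 0.5)  # Wiener increment
        x = x + drift(x) * dt + diffusion(x) * dw
        path.append(x)
    return path

rng = random.Random(42)
sigma = 0.3
# Additive noise: dX = -X dt + sigma dW (stationary variance sigma^2/2).
additive = euler_maruyama(lambda x: -x, lambda x: sigma, 1.0, 1e-3, 200_000, rng)
# Parametric noise: dX = -X dt + sigma * X dW (noise scales with the state).
parametric = euler_maruyama(lambda x: -x, lambda x: sigma * x, 1.0, 1e-3, 200_000, rng)
tail = additive[100_000:]                      # discard the transient
mean_sq = sum(v * v for v in tail) / len(tail)  # estimate of E[X^2]
```

The additive-noise path fluctuates around zero with stationary variance $\sigma^2/2$, while the parametric-noise path stays positive and decays, since here the noise vanishes with the state; in the excitable Morris – Lecar model this state dependence is what lets parametric noise trigger mixed-mode oscillations at lower intensities.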
-
System modeling, risks evaluation and optimization of a distributed computer system
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1349-1359
The article deals with the reliability of a distributed system's operation. The core of the system is an open integration platform that provides interaction of various software products for modeling gas transportation. Some of them provide access through thin clients using the cloud technology "software as a service". Mathematical models of operation, transmission and computing ensure the functioning of an automated dispatching system for oil and gas transportation. The paper presents a system solution based on the theory of Markov random processes and considers the stage of stable operation. The stationary operation mode of a Markov chain with continuous time and discrete states is described by a system of Chapman–Kolmogorov equations with respect to the average numbers (mathematical expectations) of objects in certain states. The objects of research are both system elements that are present in large numbers (thin clients and computing modules) and individual ones (a server and a network manager, i. e., a message broker). Together, they form interacting Markov random processes. The interaction arises because the transition probabilities in one group of elements depend on the average numbers of elements in other groups.
The authors propose a multi-criteria dispersion model of risk assessment for such systems (both in the broad and the narrow sense, in accordance with the IEC standard). The risk is the standard deviation of an estimated object parameter from its average value. The dispersion risk model makes it possible to define optimality criteria and the risks of the functioning of the whole system. In particular, for a thin client, the following are calculated: the lost-profit risk, the total risk of losses due to non-productive element states, and the total risk of losses over all system states.
Finally, the paper proposes compromise schemes for solving the multi-criteria problem of choosing the optimal operation strategy based on the selected set of compromise criteria.
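The stationary mode of a continuous-time Markov chain mentioned above amounts to solving the balance equations $\pi Q = 0$, $\sum_i \pi_i = 1$. A minimal sketch, with a hypothetical two-state element (working/failed) and made-up failure and repair rates, not the paper's actual system:

```python
def stationary_distribution(Q):
    """Stationary distribution pi of a CTMC with generator Q:
    solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation
    with the normalization condition (Gaussian elimination)."""
    n = len(Q)
    # Balance equations: columns of Q, transposed into rows of A.
    A = [[Q[j][i] for j in range(n)] for i in range(n)]
    A[-1] = [1.0] * n                # replace last equation by normalization
    b = [0.0] * (n - 1) + [1.0]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (b[r] - s) / A[r][r]
    return x

# Hypothetical element: state 0 = working, state 1 = failed;
# failure rate 0.1, repair rate 0.9 -> pi = (0.9, 0.1).
Q = [[-0.1, 0.1],
     [0.9, -0.9]]
pi = stationary_distribution(Q)
```

Multiplying `pi` by the number of identical elements (e.g. thin clients) gives the average state occupancies that enter the Chapman–Kolmogorov system described in the abstract.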
-
A study of the influence of two geometric parameters on the accuracy of solving the hydrostatic problem by the SPH method
Computer Research and Modeling, 2021, v. 13, no. 5, pp. 979-992
Two significant geometric parameters are proposed that affect the interpolation of physical quantities in the smoothed particle hydrodynamics (SPH) method: the smoothing coefficient, which relates the particle size and the smoothing radius, and the volume coefficient, which correctly determines the particle mass for a given distribution of particles in the medium.
The paper proposes a technique for assessing the influence of these parameters on the accuracy of SPH interpolations when solving the hydrostatic problem. To estimate the accuracy, analytical functions of the relative error of the density and of the pressure gradient in the medium are introduced; these functions depend on the smoothing coefficient and the volume coefficient. Fixing a specific form of interpolation in the SPH method allows the differential form of the relative error functions to be converted into an algebraic polynomial. The root of this polynomial gives the smoothing coefficient values that provide the minimum interpolation error for an assigned volume coefficient.
In this work, the relative error functions for the density and the pressure gradient were derived and analyzed for a sample of popular kernels with different smoothing radii. No single value of the smoothing coefficient provides the minimum error of both SPH interpolations for all the considered kernels. Kernels with different smoothing radii are identified that provide the smallest errors of the SPH interpolations when solving the hydrostatic problem, as well as kernels that do not allow correct interpolation when the hydrostatic problem is solved by the SPH method.
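The role of the smoothing coefficient can be sketched with a 1D summation-density interpolation. The cubic spline kernel, the uniform particle column, and the value $\kappa = 1.3$ below are common illustrative choices, not the paper's specific kernels or optimal coefficients:

```python
def cubic_spline_1d(r, h):
    """Standard 1D cubic spline kernel with support radius 2h."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(xs, masses, h):
    """SPH summation density rho_i = sum_j m_j W(x_i - x_j, h)."""
    return [sum(m * cubic_spline_1d(xi - xj, h) for xj, m in zip(xs, masses))
            for xi in xs]

# Uniform 1D column of unit-density fluid.
dx = 0.01
xs = [i * dx for i in range(201)]
masses = [1.0 * dx] * len(xs)  # m = rho * dx (volume coefficient = 1)
kappa = 1.3                    # smoothing coefficient: h = kappa * dx
rho = sph_density(xs, masses, kappa * dx)
mid_err = abs(rho[100] - 1.0)  # interior particle, away from the boundaries
edge_err = abs(rho[0] - 1.0)   # boundary deficiency is much larger
```

Even for a constant-density medium the interpolated density at an interior particle deviates slightly from the exact value, and the deviation depends on the smoothing coefficient; this is exactly the interpolation error whose minimization over the smoothing coefficient the paper studies analytically.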
-
Molecular dynamics assessment of the mechanical properties of fibrillar actin
Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1081-1092
Actin is a conserved structural protein that is expressed in all eukaryotic cells. When polymerized, it forms long filaments of fibrillar actin, or F-actin, which are involved in the formation of the cytoskeleton, in muscle contraction and its regulation, and in many other processes. The dynamic and mechanical properties of actin are important for its interaction with other proteins and the realization of its numerous functions in the cell. We performed 204.8 ns long molecular dynamics (MD) simulations of an actin filament segment consisting of 24 monomers, in the absence and in the presence of MgADP, at 300 K in the presence of a solvent and at physiological ionic strength, using the AMBER99SB-ILDN and CHARMM36 force fields in the GROMACS software environment, with modern structural models obtained by high-resolution cryo-electron microscopy as the initial structure. The MD calculations have shown that the stationary regime of fluctuations in the structure of the long F-actin segment develops 80–100 ns after the start of the MD trajectory. Based on the results of the MD calculations, the main parameters of the actin helix and its bending, longitudinal, and torsional stiffness were estimated using a section of the computational model far enough away from its ends. The estimated axial (2.72–2.75 nm) and angular (165–168°) subunit translation of the F-actin helix and its bending (2.8–4.7·10⁻²⁶ N·m²), longitudinal (36–47·10⁻⁹ N), and torsional (2.6–3.1·10⁻²⁶ N·m²) stiffness are in good agreement with the results of the most reliable experiments. The results of the MD calculations have shown that modern structural models of F-actin make it possible to accurately describe the dynamics and mechanical properties of F-actin, provided that the computational models contain a sufficiently large number of monomers and that modern force fields and relatively long MD trajectories are used.
The inclusion of actin partner proteins, in particular, tropomyosin and troponin, in the MD model can help to understand the molecular mechanisms of such important processes as the regulation of muscle contraction.
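The reported bending stiffness can be related to a directly measurable quantity, the persistence length, via the standard relation $L_p = B / (k_B T)$. This consistency check is ours, not from the paper; it uses only the stiffness range quoted in the abstract:

```python
KB = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0          # simulation temperature from the abstract, K

# Bending stiffness range reported in the abstract, N*m^2.
B_low, B_high = 2.8e-26, 4.7e-26

# Persistence length L_p = B / (kB * T), in meters.
Lp_low = B_low / (KB * T)
Lp_high = B_high / (KB * T)
```

This gives a persistence length of roughly 7–11 µm, i.e. of the order of ten micrometers, which is indeed the range reported for F-actin in single-filament experiments.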
-
Survey of convex optimization of Markov decision processes
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 329-353
This article reviews both historical achievements and modern results in the field of Markov decision processes (MDP) and convex optimization. This review is the first attempt to cover the field of reinforcement learning in Russian in the context of convex optimization. The fundamental Bellman equation and the optimality criteria based on it for policies (strategies that make decisions based on the currently known state of the environment) are considered, as are the main iterative algorithms of policy optimization based on solving the Bellman equations. An important section of the article is devoted to an alternative to the $Q$-learning approach: directly maximizing the agent's average reward from interaction with the environment over the chosen strategy. The solution of this convex optimization problem can be represented as a linear programming problem. The paper demonstrates how the convex optimization apparatus is used to solve the Reinforcement Learning (RL) problem. In particular, it is shown how the concept of strong duality allows a natural modification of the formulation of the RL problem, establishing the equivalence between maximizing the agent's reward and finding its optimal strategy. The paper also discusses the complexity of MDP optimization with respect to the number of state–action–reward triples obtained through interaction with the environment. Optimal bounds on the complexity of solving an MDP are presented for the case of an ergodic process with an infinite horizon, as well as for the case of a non-stationary process with a finite horizon, which can be restarted several times in a row or immediately run in parallel in several threads. The review also covers the latest results on reducing the gap between the lower and upper bounds on the complexity of optimizing an MDP with average reward (Average-reward MDP, AMDP).
In conclusion, the real-valued parametrization of the agent's policy and a class of gradient optimization methods based on maximizing the $Q$-value function are considered. In particular, a special class of MDPs with constraints on the value of the policy (Constrained Markov Decision Process, CMDP) is presented, for which a general primal-dual approach to optimization possessing strong duality is proposed.
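The Bellman optimality equation and its basic iterative solution can be sketched on a toy MDP. The two-state transition model, rewards, and discount factor below are illustrative assumptions; value iteration is one of the classical iterative algorithms the review covers, not a method proposed by it:

```python
def value_iteration(P, R, gamma, iters=500):
    """Solve the Bellman optimality equation
    V(s) = max_a [ R(s,a) + gamma * sum_t P(t|s,a) V(t) ]
    by fixed-point iteration, then extract a greedy policy."""
    n = len(R)
    V = [0.0] * n
    for _ in range(iters):
        V = [max(R[s][a] + gamma * sum(P[s][a][t] * V[t] for t in range(n))
                 for a in range(len(R[s])))
             for s in range(n)]
    policy = [max(range(len(R[s])),
                  key=lambda a: R[s][a]
                  + gamma * sum(P[s][a][t] * V[t] for t in range(n)))
              for s in range(n)]
    return V, policy

# Toy 2-state MDP: action 0 stays in place, action 1 switches state;
# reward 1 is earned only in state 1.
P = [[[1.0, 0.0], [0.0, 1.0]],   # from state 0: stay / switch
     [[0.0, 1.0], [1.0, 0.0]]]   # from state 1: stay / switch
R = [[0.0, 0.0], [1.0, 1.0]]
V, policy = value_iteration(P, R, gamma=0.9)
```

The optimal policy switches into state 1 and stays there, giving $V(1) = 1/(1-\gamma) = 10$ and $V(0) = \gamma V(1) = 9$; the same solution can equivalently be obtained from the linear programming formulation mentioned in the abstract.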
-
Consideration of psychological factors in models of battle (conflict)
Computer Research and Modeling, 2016, v. 8, no. 6, pp. 951-964
The course and outcome of a battle largely depend on the morale of the troops, characterized by the percentage of losses in killed and wounded at which the troops still continue to fight. Every battle ends as a psychological act: the refusal of one of the parties to continue it. Typically, in models of battle the psychological factor is taken into account through the termination condition of the Lanchester equations (the battle ends when the number of one of the parties reaches zero). It is emphasized that Lanchester-type models satisfactorily describe the dynamics of a battle only in its initial stages. To resolve this contradiction, it is proposed to use a modification of the Lanchester equations that takes into account the fact that at any moment of the battle the enemy fires only at fighters who have not been hit and have not abandoned the battle. The obtained differential equations are solved numerically and make it possible to take into account the influence of the psychological factor on the dynamics and to estimate the completion time of the conflict. Computational experiments confirm the fact, known from military theory, that a battle usually ends with the refusal of the soldiers of one of the parties to continue it (avoidance of combat in various forms). Along with models of temporal and spatial dynamics, it is proposed to use a modification of the conflict technology functions of S. Skaperdas based on the principles of combat. The estimate of the probability of victory of one side in the battle takes into account the readiness of the warring sides to suffer bloody casualties and the build-up of military superiority.
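The idea of a psychological termination condition can be sketched on the classical Lanchester square-law equations. The stopping rule, the strengths, the effectiveness coefficients, and the 30 % morale threshold below are illustrative assumptions, not the authors' modified model:

```python
def lanchester_battle(x0, y0, a, b, frac_x, frac_y, dt=0.001):
    """Euler integration of the Lanchester square-law equations
    dx/dt = -a*y, dy/dt = -b*x, with a psychological stopping rule:
    a side quits once its losses reach a given fraction of its initial
    strength, instead of fighting until its strength reaches zero."""
    x, y, t = x0, y0, 0.0
    while x > x0 * (1 - frac_x) and y > y0 * (1 - frac_y):
        x, y = x - a * y * dt, y - b * x * dt  # explicit Euler step
        t += dt
    return x, y, t

# Hypothetical engagement: equal effectiveness, side Y outnumbered,
# both sides quit after 30 % losses.
x, y, t = lanchester_battle(1000.0, 800.0, a=0.01, b=0.01,
                            frac_x=0.3, frac_y=0.3)
```

With equal effectiveness the square-law invariant $x^2 - y^2$ is conserved, so the smaller side Y reaches its 30 % loss threshold first and the battle ends with both strengths still well above zero, in line with the abstract's point that battles end by refusal to continue rather than by annihilation.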
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index