All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
On some stochastic mirror descent methods for constrained online optimization problems
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 205-217
The problem of online convex optimization naturally occurs in cases when statistical information is updated over time. The mirror descent method is well known for non-smooth optimization problems; it is an extension of the subgradient method for solving non-smooth convex optimization problems in the case of a non-Euclidean distance. This paper is devoted to a stochastic variant of recently proposed Mirror Descent methods for convex online optimization problems with convex Lipschitz (generally, non-smooth) functional constraints. This means that we can still use the value of the functional constraint, but instead of the (sub)gradients of the objective functional and the constraint functional, we use their stochastic (sub)gradients. More precisely, assume that on a closed subset of an $n$-dimensional vector space, $N$ convex Lipschitz non-smooth functionals are given. The problem is to minimize the arithmetic mean of these functionals subject to a convex Lipschitz constraint. Two methods using stochastic (sub)gradients are proposed for solving this problem: an adaptive method (which requires knowledge of the Lipschitz constant neither for the objective functional nor for the constraint functional) and a non-adaptive method (which requires knowledge of the Lipschitz constants for both). Note that the stochastic (sub)gradient of each functional may be computed only once. In the case of non-negative regret, we find that the number of non-productive steps is $O(N)$, which indicates the optimality of the proposed methods. We consider an arbitrary proximal structure, which is essential for decision-making problems. The results of numerical experiments are presented, allowing one to compare the adaptive and non-adaptive methods on some examples. It is shown that the adaptive method can significantly improve the number of found solutions.
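A minimal sketch of the adaptive scheme in the Euclidean proximal setup (projection as the prox step), assuming a stochastic (sub)gradient oracle for the objective and the constraint; the names `sgrad_f`, `sgrad_g` and `proj` are illustrative, and this is not the full algorithm analyzed in the paper:

```python
import numpy as np

def adaptive_mirror_descent(sgrad_f, sgrad_g, g, x0, eps, n_steps, proj):
    """Sketch: adaptive mirror descent with a functional constraint g(x) <= 0
    in the Euclidean proximal setup. Step sizes are taken from the observed
    (sub)gradient norms, so no Lipschitz constants are required."""
    x = x0.astype(float).copy()
    productive = []                            # iterates of productive steps
    for _ in range(n_steps):
        if g(x) <= eps:                        # productive step: constraint nearly satisfied
            v = sgrad_f(x)                     # stochastic (sub)gradient of the objective
            productive.append(x.copy())
        else:                                  # non-productive step: decrease the violation
            v = sgrad_g(x)                     # stochastic (sub)gradient of the constraint
        h = eps / max(float(v @ v), 1e-12)     # adaptive step size
        x = proj(x - h * v)
    return np.mean(productive, axis=0) if productive else x
```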
-
Ellipsoid method for convex stochastic optimization in small dimension
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1137-1147
The article considers minimization of the expectation of a convex function. Problems of this type often arise in machine learning and a variety of other applications. In practice, stochastic gradient descent (SGD) and similar procedures are usually used to solve such problems. We propose to use the ellipsoid method with mini-batching, which converges linearly and can be more efficient than SGD for a class of problems. This is verified by our experiments, which are publicly available. The algorithm requires neither smoothness nor strong convexity of the objective to achieve linear convergence. Thus, its complexity does not depend on the condition number of the problem. We prove that the method arrives at an approximate solution with given probability when using mini-batches of size proportional to the desired accuracy to the power −2. This enables efficient parallel execution of the algorithm, whereas the possibilities for batch parallelization of SGD are rather limited. Despite fast convergence, the ellipsoid method can result in a greater total number of oracle calls than SGD, which works decently with small batches. The complexity is quadratic in the dimension of the problem, hence the method is suitable for relatively small dimensions.
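A hedged sketch of the basic scheme, assuming a `stoch_grad(x, m)` oracle that returns the average of `m` stochastic (sub)gradients at `x` (an assumed interface, not the paper's code):

```python
import numpy as np

def ellipsoid_minibatch(stoch_grad, x0, R, n_iters, batch_size):
    """Sketch of the ellipsoid method driven by mini-batched stochastic gradients.
    The method starts from the ball of radius R around x0 and is intended for
    small dimension n >= 2; illustrative only."""
    n = x0.size
    x = x0.astype(float).copy()
    H = (R ** 2) * np.eye(n)                   # matrix of the current ellipsoid
    for _ in range(n_iters):
        g = stoch_grad(x, batch_size)          # mini-batching reduces the gradient noise
        gnorm = np.sqrt(g @ H @ g)
        if gnorm < 1e-12:                      # degenerate direction, stop
            break
        gt = (H @ g) / gnorm
        x = x - gt / (n + 1)                   # center of the next ellipsoid
        H = (n ** 2 / (n ** 2 - 1.0)) * (H - 2.0 / (n + 1) * np.outer(gt, gt))
    return x                                   # in practice, return the best recorded point
```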
-
Modern ways to overcome neural networks catastrophic forgetting and empirical investigations on their structural issues
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 45-56
This paper presents the results of experimental validation of some structural issues concerning the practical use of methods to overcome catastrophic forgetting of neural networks. A comparison of currently effective methods such as EWC (Elastic Weight Consolidation) and WVA (Weight Velocity Attenuation) is made and their advantages and disadvantages are considered. It is shown that EWC is better for tasks where full retention of learned skills is required on all the tasks in the training queue, while WVA is more suitable for sequential tasks with very limited computational resources, or when reuse of representations and acceleration of learning from task to task is required rather than exact retention of skills. The attenuation in the WVA method must be applied to the optimization step, i. e. to the increments of the neural network weights, rather than to the loss function gradient itself, and this is true for any gradient optimization method except the simplest stochastic gradient descent (SGD). The choice of the optimal weight attenuation function between the hyperbolic function and the exponent is considered. It is shown that hyperbolic attenuation is preferable because, despite comparable quality at optimal values of the hyperparameter of the WVA method, it is more robust to deviations of the hyperparameter from the optimal value (this hyperparameter in the WVA method provides a balance between preserving old skills and learning a new skill). Empirical observations are presented that support the hypothesis that the optimal value of this hyperparameter does not depend on the number of tasks in the sequential learning queue; consequently, this hyperparameter can be tuned on a small number of tasks and used on longer sequences.
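An illustrative PyTorch-style sketch of applying a hyperbolic attenuation to the optimizer's weight increments rather than to the gradients, as the abstract recommends; the names `importance` (per-parameter weight-velocity statistics accumulated on previous tasks) and `lam` are assumptions, not the authors' exact implementation:

```python
import torch

def wva_step(optimizer, model, importance, lam):
    """Sketch of a WVA-style update: attenuate the *increment* produced by an
    arbitrary optimizer (e.g. Adam) by the hyperbolic factor
    1 / (1 + lam * importance). `importance[p]` is a tensor (or scalar) per parameter."""
    old = {p: p.detach().clone() for p in model.parameters()}
    optimizer.step()                                   # increment proposed by the optimizer
    with torch.no_grad():
        for p in model.parameters():
            delta = p - old[p]
            p.copy_(old[p] + delta / (1.0 + lam * importance[p]))  # hyperbolic attenuation
```

Gradients are computed as usual with `loss.backward()` before calling `wva_step`; attenuating the raw gradients instead would coincide with this update only for plain SGD.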
-
The stabilizing role of fish population structure under the influence of fishery and random environment variations
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 609-620
We study the influence of fishery on a structured fish population under random changes of habitat conditions. The population parameters correspond to dominant pelagic fish species of the Far-Eastern seas of the northwestern part of the Pacific Ocean (pollock, herring, sardine). Similar species inhabit various parts of the World Ocean. The body size distribution was chosen as the main population characteristic. This characteristic is easy to measure and adequately reflects the main qualities of a specimen such as age, maturity and other morphological and physiological peculiarities. Environmental fluctuations have a great influence on individuals in the early stages of development and little influence on the vital activity of mature individuals. The fishery revenue was chosen as the optimality criterion. The main control variable is the fishing effort. We have chosen a quadratic dependence of fishing revenue on the fishing effort, in accordance with accepted economic ideas stating that expenses grow with the production volume. The model study shows that the population structure ensures increased population stability. The growth of individuals and their loss due to natural mortality smooth out the oscillations of population density arising from the strong influence of environmental fluctuations on young individuals. The smoothing role is played by the diffusion component of the growth processes. The fishery in its turn smooths the fluctuations (including random fluctuations) of the environment and has a substantial impact on the abundance of fry and the subsequent population dynamics. The optimal time-dependent fishing effort strategy was compared to a stationary fishing effort strategy. It is shown that in the case of quickly changing habitat conditions and stochastic dynamics of population replenishment there exists a stationary fishing effort having approximately the same efficiency as the optimal time-dependent fishing effort. This means that a constant or weakly varying fishing effort can be a very efficient strategy in terms of revenue.
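A toy scalar illustration (not the authors' size-structured model) of how revenue that is linear in the catch and quadratic in the fishing effort can be evaluated for a stationary effort under noisy recruitment; all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def revenue(effort, T=200, r=0.8, K=1.0, price=1.0, cost=0.5, sigma=0.4):
    """Toy model: a scalar stock with noisy recruitment is harvested with a
    constant fishing effort; per-step revenue is price * catch - cost * effort**2."""
    x, total = 0.5 * K, 0.0
    for _ in range(T):
        recruit = r * x * (1 - x / K) * np.exp(sigma * rng.normal())  # noisy recruitment
        catch = effort * x
        total += price * catch - cost * effort ** 2                   # quadratic cost of effort
        x = max(x + recruit - catch, 0.0)
    return total / T

best_effort = max(np.linspace(0, 1, 21), key=revenue)   # crude search for a stationary effort
```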
-
Searching stochastic equilibria in transport networks by universal primal-dual gradient method
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 335-345
We consider one of the problems of transport modelling: searching for the equilibrium distribution of traffic flows in a network. We use the classic Beckmann model to describe time costs and flow distribution in a network represented by a directed graph. Meanwhile, agents' behavior is not completely rational, which is described by the introduction of Markov logit dynamics: any driver selects a route randomly according to the Gibbs distribution, taking into account current time costs on the edges of the graph. Thus, the problem is reduced to searching for the stationary distribution of this dynamics, which is a stochastic Nash–Wardrop equilibrium in the corresponding population congestion game in the transport network. Since the game is potential, this problem is equivalent to minimizing some functional over the flow distribution. The stochasticity is reflected in the appearance of an entropy regularization, in contrast to the non-stochastic case. The dual problem is constructed to obtain a solution of the optimization problem. The universal primal-dual gradient method is applied. A major specificity of this method lies in its adaptive adjustment to the local smoothness of the problem, which is most important in the case of a complex structure of the objective function and an inability to obtain an a priori smoothness bound with acceptable accuracy. Such a situation occurs in the considered problem since the properties of the function strongly depend on the transport graph, on which we do not impose strong restrictions. The article describes the algorithm, including the numerical differentiation used to calculate the objective function value and gradient. In addition, the paper presents a theoretical estimate of the time complexity of the algorithm and the results of numerical experiments conducted on the network of a small American town.
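A small sketch of the logit (Gibbs) route choice that underlies the stochastic equilibrium; `gamma` plays the role of the temperature of the Markov logit dynamics, and the route costs in the example are arbitrary:

```python
import numpy as np

def logit_route_split(route_costs, demand, gamma):
    """Each driver picks route i with probability proportional to exp(-cost_i / gamma);
    the flow of `demand` drivers is split accordingly."""
    c = np.asarray(route_costs, dtype=float)
    w = np.exp(-(c - c.min()) / gamma)         # shift for numerical stability
    p = w / w.sum()
    return demand * p                          # flow assigned to each route

# Example: three parallel routes between one origin-destination pair
flows = logit_route_split([10.0, 12.0, 15.0], demand=1000, gamma=2.0)
```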
-
On the relations of stochastic convex optimization problems with empirical risk minimization problems on $p$-norm balls
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319
In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e. g., risk minimization) and mathematical statistics (e. g., maximum likelihood estimation). There are two main approaches to solving such problems, namely the Stochastic Approximation approach (online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline approach). In the offline approach, the problem is replaced by its empirical counterpart (the empirical risk minimization problem). The natural question is how to define the problem sample size, i. e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is a solution of the original problem with the desired precision. This issue is one of the main issues in modern machine learning and optimization. In the last decade, a lot of significant advances were made in these areas for solving convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of arbitrary balls in $p$-norms. We also explore how the parameter $p$ affects the estimates of the required number of terms in the empirical risk.
In this paper, both convex and saddle point optimization problems are considered. For strongly convex problems, the existing results on the same sample sizes in both approaches (online and offline) were generalized to arbitrary norms. Moreover, it was shown that the strong convexity condition can be weakened: the obtained results are valid for functions satisfying the quadratic growth condition. In the case when this condition is not met, it is proposed to regularize the original problem in an arbitrary norm. In contrast to convex problems, saddle point problems are much less studied. For saddle point problems, the sample size was obtained under the condition of $\gamma$-growth of the objective function. When $\gamma = 1$, this condition is the sharp minimum condition for convex problems. In this article, it was shown that the sample size in the case of a sharp minimum is almost independent of the desired accuracy of the solution of the original problem.
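A toy Euclidean ($p = 2$) illustration of the two approaches; `grad(x, xi)` is assumed to return a stochastic (sub)gradient for realization `xi`, `sample()` to draw a fresh realization, and `N` is the sample size whose choice the paper studies:

```python
import numpy as np

def project_ball(x, R):
    """Projection onto the Euclidean ball of radius R (p = 2 case only, for brevity)."""
    nrm = np.linalg.norm(x)
    return x if nrm <= R else x * (R / nrm)

def sa_solver(grad, sample, n, R, steps):
    """Online (Stochastic Approximation): one fresh realization per step."""
    x = np.zeros(n)
    for k in range(1, steps + 1):
        x = project_ball(x - (R / np.sqrt(k)) * grad(x, sample()), R)
    return x

def saa_solver(grad, sample, n, R, N, steps):
    """Offline (Sample Average Approximation): draw N realizations once and
    minimize the empirical risk built from them."""
    data = [sample() for _ in range(N)]        # N is the sample size in question
    x = np.zeros(n)
    for k in range(1, steps + 1):
        g = np.mean([grad(x, xi) for xi in data], axis=0)
        x = project_ball(x - (R / np.sqrt(k)) * g, R)
    return x
```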
-
Model for operational optimal control of financial resources distribution in a company
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 343-358
A critical analysis of existing approaches, methods and models for solving the problem of operational management of financial resources has been carried out in the article. A number of significant shortcomings of the existing models were identified, limiting the scope of their effective use: the static nature of the models; the probabilistic nature of financial flows not being taken into account; daily amounts of receivables and payables, which significantly affect the solvency and liquidity of a company, not being identified. This necessitates the development of a new model that reflects the essential properties of the financial flow planning system: stochasticity, dynamism, non-stationarity.
A model for the distribution of financial flows has been developed. It is based on the principles of optimal dynamic control and provides financial resources planning that ensures an adequate level of liquidity and solvency of a company and accounts for the uncertainty of the initial data. An algorithm for designing the target cash balance, based on the principle of ensuring a company's financial stability under changing financial constraints, is proposed.
A characteristic feature of the proposed model is the representation of the cash distribution process as a discrete dynamic process, for which a plan of financial resources allocation is determined that ensures the extremum of an optimality criterion. The design of such a plan is based on coordinating payments (cash expenses) with cash receipts. This approach allows one to synthesize different plans that differ in combinations of financial outflows, and then to select the best one according to a given criterion. The minimum of total costs associated with the payment of fines for untimely financing of expenses was taken as the optimality criterion. The constraints in the model are the requirement to maintain the minimum allowable cash balances in the subperiods of the planning period, as well as the obligation to make payments during the planning period, taking into account their maturity dates. The suggested model efficiently solves the problem of distributing financial resources under uncertainty in the timing and amounts of receipts and of coordinating cash inflows and outflows. The practical significance of the research lies in the application of the developed model, which allows improving the quality of financial planning and increasing the management and operational efficiency of a company.
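An illustrative linear-programming reformulation of the planning step under the stated criterion and constraints (minimal fines for late payments, minimum allowable cash balances); this is a sketch under assumed data structures, not the authors' algorithm:

```python
import numpy as np
from scipy.optimize import linprog

def plan_payments(inflows, amounts, due, fine_rate, start_cash, min_cash):
    """Sketch: decide how much of each obligation to pay in each subperiod so that
    total fines for paying after the due date are minimal and the cash balance
    never drops below min_cash. Variables x[i, t] >= 0 are amounts of obligation i
    paid in subperiod t; fines are assumed linear in the delay."""
    T, n = len(inflows), len(amounts)
    cost = np.array([[fine_rate[i] * max(t - due[i], 0) for t in range(T)]
                     for i in range(n)]).ravel()          # fine per unit paid late

    # each obligation must be fully paid within the planning horizon
    A_eq = np.zeros((n, n * T)); b_eq = np.array(amounts, dtype=float)
    for i in range(n):
        A_eq[i, i * T:(i + 1) * T] = 1.0

    # the running cash balance must stay above min_cash in every subperiod
    A_ub = np.zeros((T, n * T)); b_ub = np.zeros(T)
    for t in range(T):
        for i in range(n):
            A_ub[t, i * T:i * T + t + 1] = 1.0            # payments made up to subperiod t
        b_ub[t] = start_cash + sum(inflows[:t + 1]) - min_cash
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.x.reshape(n, T)
```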
-
Comparison of stochastic approximation and sample average approximation for saddle point problem with bilinear coupling term
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 381-391
Stochastic optimization is a current area of research due to significant advances in machine learning and their applications to everyday problems. In this paper, we consider two fundamentally different methods for solving the stochastic optimization problem: online and offline algorithms. The corresponding algorithms have their qualitative advantages over each other. For example, offline algorithms require solving an auxiliary problem with high accuracy. However, this can be done in a distributed manner, and this opens up fundamental possibilities such as, for example, the construction of a dual problem. Despite this, both online and offline algorithms pursue a common goal: solving the stochastic optimization problem with a given accuracy. This is reflected in the comparison of the computational complexity of the described algorithms, which is demonstrated in this paper.
The comparison of the described methods is carried out for two types of stochastic problems: convex optimization and saddle point problems. For problems of stochastic convex optimization, the existing solutions make it possible to compare online and offline algorithms in some detail. In particular, for strongly convex problems, the computational complexity of the algorithms is the same, and the condition of strong convexity can be weakened to the condition of $\gamma$-growth of the objective function. From this point of view, saddle point problems are much less studied. Nevertheless, existing solutions allow us to outline the main directions of research. Thus, significant progress has been made for bilinear saddle point problems using online algorithms. Offline algorithms are represented by just one study. In this paper, this example is used to demonstrate the similarity of both algorithms to the convex optimization case. The question of the accuracy required when solving the auxiliary problem for saddle point problems has also been worked out. On the other hand, the saddle point problem of stochastic optimization generalizes the convex one, that is, it is its logical continuation. This is manifested in the fact that existing results from convex optimization can be transferred to saddle point problems. In this paper, such a transfer is carried out for the results of the online algorithm in the convex case, when the objective function satisfies the $\gamma$-growth condition.
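A sketch of an online (stochastic approximation) scheme for a saddle point problem with bilinear coupling, $\min_x \max_y \langle b, x\rangle + \langle y, Ax\rangle - \langle c, y\rangle$, using a stochastic extragradient step with averaged iterates; `sample_A` is an assumed oracle returning a fresh noisy realization of the coupling matrix:

```python
import numpy as np

def stoch_extragradient(sample_A, b, c, x0, y0, steps, eta):
    """Sketch: stochastic extragradient for a bilinear-coupling saddle point problem.
    Averaged iterates are returned, as is standard for saddle point problems."""
    x, y = x0.astype(float).copy(), y0.astype(float).copy()
    xs, ys = [], []
    for _ in range(steps):
        A = sample_A()                         # stochastic realization of the coupling matrix
        x_half = x - eta * (b + A.T @ y)       # predictor step
        y_half = y + eta * (A @ x - c)
        A = sample_A()
        x = x - eta * (b + A.T @ y_half)       # corrector step at the predicted point
        y = y + eta * (A @ x_half - c)
        xs.append(x.copy()); ys.append(y.copy())
    return np.mean(xs, axis=0), np.mean(ys, axis=0)
```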
-
Stochastic optimization in digital pre-distortion of the signal
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 399-416
In this paper, we test the performance of some modern stochastic optimization methods and practices on the digital pre-distortion (DPD) problem, which is an important part of signal processing on base stations providing wireless communication. In the first part of our study, we focus on the search for the best performing method and its proper modifications. In the second part, we propose a new, quasi-online testing framework that allows us to fit our modeling results to the behavior of a real-life DPD prototype, retest some of the practices considered in the previous section and confirm the advantages of the method that appears to be the best under real-life conditions. For the model used, the maximum achieved improvement in depth is 7% in the standard regime and 5% in the online regime (the metric itself is on a logarithmic scale). We also achieve a halving of the working time while preserving a 3% and 6% improvement in depth for the standard and online regimes, respectively. All comparisons are made to the Adam method, which was highlighted as the best stochastic method for the DPD problem in [Pasechnyuk et al., 2021], and to the Adamax method, which is the best in the proposed online regime.
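For reference, a minimal implementation of the Adam baseline against which the DPD results are compared (Adamax differs in replacing the second-moment accumulator by an infinity-norm one); the DPD model itself is not reproduced here, and `grad` is an assumed mini-batch gradient oracle:

```python
import numpy as np

def adam(grad, x0, steps, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """Standard Adam update loop; grad(x) returns a stochastic gradient
    computed on a mini-batch of signal samples."""
    x = x0.astype(float).copy()
    m = np.zeros_like(x); v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)              # bias correction
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x
```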
-
Nonsmooth Distributed Min-Max Optimization Using the Smoothing Technique
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 469-480
Distributed saddle point problems (SPPs) have numerous applications in optimization, matrix games and machine learning. For example, the training of generative adversarial networks is represented as a min-max optimization problem, and training regularized linear models can be reformulated as an SPP as well. This paper studies distributed nonsmooth SPPs with Lipschitz-continuous objective functions. The objective function is represented as a sum of several components that are distributed between groups of computational nodes. The nodes, or agents, exchange information through some communication network that may be centralized or decentralized. A centralized network has a universal information aggregator (a server, or master node) that directly communicates with each of the agents and therefore can coordinate the optimization process. In a decentralized network, all the nodes are equal, the server node is not present, and each agent only communicates with its immediate neighbors.
We assume that each of the nodes locally holds its objective and can compute its value at given points, i. e. has access to a zero-order oracle. Zero-order information is used when the gradient of the function is costly or impossible to compute, or when the function is not differentiable. For example, in reinforcement learning one needs to generate a trajectory to evaluate the current policy. This policy evaluation process can be interpreted as the computation of the function value. We propose an approach that uses a smoothing technique, i. e., applies a first-order method to a smoothed version of the initial function. It can be shown that the stochastic gradient of the smoothed function can be viewed as a random two-point gradient approximation of the initial function (a sketch of this estimator is given below). Smoothing approaches have been studied for distributed zero-order minimization, and our paper generalizes the smoothing technique to SPPs.
Keywords: convex optimization, distributed optimization.
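A sketch of the two-point gradient approximation mentioned above, for the Euclidean randomized smoothing $f_\tau(x) = \mathbb{E}\, f(x + \tau u)$ with $u$ uniform in the unit ball; the scaling $d/(2\tau)$ and the sphere sampling are standard, but the function names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def two_point_grad(f, x, tau):
    """For e uniform on the unit sphere, (d / (2 * tau)) * (f(x + tau*e) - f(x - tau*e)) * e
    is an unbiased stochastic gradient of the smoothed function f_tau, built from
    two zero-order evaluations of the original (possibly nonsmooth) function f."""
    d = x.size
    e = rng.normal(size=d)
    e /= np.linalg.norm(e)                     # uniform direction on the unit sphere
    return d / (2.0 * tau) * (f(x + tau * e) - f(x - tau * e)) * e
```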
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
The journal is included in the RSCI