- Using feedback functions to solve parametric programming problems
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1125-1151
We consider a finite-dimensional optimization problem whose formulation contains parameters in addition to the required variables. The solution of such a problem is a dependence of the optimal values of the variables on the parameters. In general, these dependences are not functions, since they may be multivalued, and even in the single-valued case they may be nondifferentiable. Moreover, their domain of definition may be narrower than the domains of the functions appearing in the statement of the original problem. All these properties make it difficult to solve both the original parametric problem and other problems whose statements involve these dependences. To overcome these difficulties, methods of nondifferentiable optimization are usually employed.
This article proposes an alternative approach that makes it possible to obtain solutions of parametric problems in a form free of the properties listed above. It is shown that such representations can be studied by standard algorithms based on the Taylor formula. This form is a function that smoothly approximates the solution of the original problem for any parameter values specified in its statement, with the approximation error controlled by a special parameter. The proposed approximations are constructed using special functions that establish feedback (within the optimality conditions of the original problem) between the variables and the Lagrange multipliers. The method is described for linear problems and then generalized to the nonlinear case.
From a computational point of view, constructing the approximation amounts to finding a saddle point of a modified Lagrange function of the original problem, where the modification is performed in a special way using feedback functions. It is shown that the necessary conditions for the existence of such a saddle point are similar to the conditions of the Karush-Kuhn-Tucker theorem but contain neither inequality constraints nor complementary slackness conditions. These necessary conditions define the approximation implicitly, so its differential characteristics are computed via the implicit function theorem. The same theorem is used to reduce the approximation error to an acceptable level.
Features of the practical implementation of the feedback function method, including estimates of the rate of convergence to the exact solution, are demonstrated for several specific classes of parametric optimization problems. Specifically, we consider problems of finding the global extremum of functions of several variables and the multiple-extremum (maximin-minimax) problem. Optimization problems arising in the use of multicriteria mathematical models are also considered. Demonstration examples are given for each of these classes.
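The abstract describes the construction only verbally, and the paper's feedback functions are not reproduced here. As an illustrative stand-in for the same general idea — replacing the KKT inequalities and complementary slackness with a single smooth equation whose error is governed by a smoothing parameter — the sketch below uses the classical smoothed Fischer-Burmeister function on a toy parametric problem; the problem, the parameter MU, and all constants are assumptions for demonstration only.

```python
# A minimal sketch (not the paper's construction): smooth the KKT
# complementarity conditions of the toy parametric QP
#   min_x 0.5*x^2 - p*x  s.t.  x <= 1,
# whose exact solution x*(p) = min(p, 1) is nonsmooth at p = 1.
import numpy as np
from scipy.optimize import fsolve

MU = 1e-2  # smoothing parameter: controls the approximation error

def smoothed_fb(a, b, mu):
    """phi_mu(a, b) = 0  <=>  a > 0, b > 0, a*b = mu^2 (smooth complementarity)."""
    return a + b - np.sqrt(a * a + b * b + 2.0 * mu * mu)

def kkt_residual(z, p, mu):
    x, lam = z
    stationarity = x - p + lam                        # d/dx [0.5 x^2 - p x] + lam
    complementarity = smoothed_fb(1.0 - x, lam, mu)   # replaces lam >= 0, x <= 1, lam(x-1) = 0
    return [stationarity, complementarity]

for p in [0.5, 0.99, 1.0, 1.01, 1.5]:
    x, lam = fsolve(kkt_residual, x0=[0.0, 1.0], args=(p, MU))
    print(f"p={p:5.2f}  x_mu={x:.4f}  exact={min(p, 1.0):.4f}")
```

As MU decreases, the smooth solution x_mu(p) approaches the nonsmooth exact dependence min(p, 1) while remaining differentiable in p, which is the effect the abstract attributes to its feedback-function approximations.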
- Variance reduction for minimax problems with a small dimension of one of the variables
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 257-275
The paper is devoted to convex-concave saddle point problems where the objective is a sum of a large number of functions. Such problems attract considerable attention from the mathematical community due to the variety of applications in machine learning, including adversarial learning, adversarial attacks and robust reinforcement learning, to name a few. The individual functions in the sum usually represent losses related to examples from a data set. Additionally, the formulation admits a possibly nonsmooth composite term. Such terms often reflect regularization in machine learning problems. We assume that the dimension of one of the variable groups is relatively small (about a hundred or less), and the other one is large. This case arises, for example, when one considers the dual formulation for a minimization problem with a moderate number of constraints. The proposed approach is based on using Vaidya's cutting plane method to minimize with respect to the outer block of variables. This optimization algorithm is especially effective when the dimension of the problem is not very large. An inexact oracle for Vaidya's method is calculated via an approximate solution of the inner maximization problem, which is solved by the accelerated variance reduced algorithm Katyusha. Thus, we leverage the structure of the problem to achieve fast convergence. Separate complexity bounds for gradients of different components with respect to different variables are obtained in the study. The proposed approach imposes very mild assumptions on the objective. In particular, neither strong convexity nor smoothness is required with respect to the low-dimensional variable group. The number of steps of the proposed algorithm, as well as the arithmetic complexity of each step, explicitly depends on the dimensionality of the outer variable, hence the assumption that it is relatively small.
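The two-level structure described above can be illustrated with much simpler stand-ins: below, plain gradient descent replaces Vaidya's cutting-plane method on the small outer block, plain gradient ascent replaces Katyusha on the large inner block, and the outer gradient is obtained from the inexact inner solution via Danskin's theorem. The toy objective, dimensions, and step sizes are all made-up choices for the sketch.

```python
# Structural sketch of the outer/inner scheme on the toy saddle problem
#   min_y max_x  f(x, y) = y^T A x - 0.5*||x||^2 + b^T y,
# which is concave in x (large block) and convex in y (small block).
import numpy as np

rng = np.random.default_rng(0)
dim_y, dim_x = 3, 200            # small outer block, large inner block
A = rng.normal(size=(dim_y, dim_x)) / np.sqrt(dim_x)
b = rng.normal(size=dim_y)

def inner_max(y, steps=200, lr=0.5):
    """Approximately solve max_x f(x, y): the inexact oracle for the outer method."""
    x = np.zeros(dim_x)
    for _ in range(steps):
        x += lr * (A.T @ y - x)  # grad_x f = A^T y - x
    return x

y = np.zeros(dim_y)
for _ in range(500):
    x = inner_max(y)
    grad_y = A @ x + b           # Danskin: gradient of g(y) = max_x f(x, y)
    y -= 0.05 * grad_y

y_exact = np.linalg.solve(A @ A.T, -b)   # closed-form minimizer of g
print("outer gap:", np.linalg.norm(y - y_exact))
```

The point of the sketch is the division of labor: the inner solver is called once per outer step, so an expensive high-dimensional oracle is queried only as often as the low-dimensional outer method requires, which is the efficiency argument made in the abstract.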
- On Accelerated Methods for Saddle-Point Problems with Composite Structure
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 433-467
We consider strongly-convex-strongly-concave saddle-point problems with a general non-bilinear objective and different condition numbers with respect to the primal and dual variables. First, we consider such problems with smooth composite terms, one of which has a finite-sum structure. For this setting we propose a variance reduction algorithm with complexity estimates superior to the existing bounds in the literature. Second, we consider finite-sum saddle-point problems with composite terms and propose several algorithms depending on the properties of the composite terms. When the composite terms are smooth, we obtain better complexity bounds than those in the literature, including the bounds of a recently proposed nearly-optimal algorithm that does not take the composite structure of the problem into account. If the composite terms are prox-friendly, we propose a variance reduction algorithm that, on the one hand, is accelerated compared to existing variance reduction algorithms and, on the other hand, provides in the composite setting complexity bounds similar to those of the nearly-optimal algorithm designed for the noncomposite setting. Besides, our algorithms allow one to separate the complexity bounds, i.e. to estimate, for each part of the objective separately, the number of oracle calls sufficient to achieve a given accuracy. This is important since different parts can have different arithmetic complexity of the oracle, and it is desirable to call expensive oracles less often than cheap ones. The key to all these results is our general framework for saddle-point problems, which may be of independent interest. This framework, in turn, is based on our proposed Accelerated Meta-Algorithm for composite optimization with probabilistic inexact oracles and probabilistic inexactness in the proximal mapping, which may also be of independent interest.
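The "prox-friendly composite" setting mentioned in the abstract can be illustrated with a much simpler method than the paper's accelerated variance-reduced algorithm: the sketch below runs simultaneous proximal gradient descent-ascent, touching the nonsmooth l1 composite term only through its proximal operator. The objective, constants, and step size are illustrative assumptions.

```python
# Sketch of a prox-friendly composite saddle-point problem (NOT the paper's
# accelerated method): simultaneous proximal gradient descent-ascent on the
# strongly-convex-strongly-concave toy objective
#   min_x max_y  0.5*mu_x*||x||^2 + c^T x + y^T B x - 0.5*mu_y*||y||^2 + tau*||x||_1.
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 40
B = rng.normal(size=(m, n)) / np.sqrt(n)
c = rng.normal(size=n)
mu_x, mu_y, tau, eta = 1.0, 1.0, 0.5, 0.1

def prox_l1(v, t):
    """Prox of t*||.||_1: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, y = np.zeros(n), np.zeros(m)
for _ in range(2000):
    gx = mu_x * x + c + B.T @ y   # grad_x of the smooth part only
    gy = B @ x - mu_y * y         # grad_y of the smooth part only
    x = prox_l1(x - eta * gx, eta * tau)   # composite term enters via its prox
    y = y + eta * gy

print("nonzeros in x:", int(np.count_nonzero(x)), "of", n)
```

Note how the gradient oracle of the smooth part and the prox oracle of the composite part are queried separately; the abstract's "separated complexity bounds" count calls to each such oracle independently.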