Search results for 'saddle point problem':
Articles found: 10
  1. Alkousa M.S., Gasnikov A.V., Dvurechensky P.E., Sadiev A.A., Razouk L.Ya.
    An approach for the nonconvex uniformly concave structured saddle point problem
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 225-237

    Recently, saddle point problems have received much attention due to their powerful modeling capability for many problems from diverse domains. Applications of these problems occur in many applied areas, such as robust optimization, distributed optimization, game theory, and many applications in machine learning such as empirical risk minimization and generative adversarial networks training. Therefore, many researchers have actively worked on developing numerical methods for solving saddle point problems in many different settings. This paper is devoted to developing a numerical method for solving saddle point problems in the nonconvex uniformly-concave setting. We study a general class of saddle point problems with composite structure and Hölder-continuous higher-order derivatives. To solve the problem under consideration, we propose an approach in which we reduce the problem to a combination of two auxiliary optimization problems, one for each group of variables: an outer minimization problem w.r.t. the primal variables and an inner maximization problem w.r.t. the dual variables. For solving the outer minimization problem, we use the Adaptive Gradient Method, which is applicable to nonconvex problems and also works with an inexact oracle that is generated by approximately solving the inner problem. For solving the inner maximization problem, we use the Restarted Unified Acceleration Framework, which unifies the high-order acceleration methods for minimizing a convex function with Hölder-continuous higher-order derivatives. Separate complexity bounds are provided for the number of calls to the first-order oracle for the outer minimization problem and to the higher-order oracle for the inner maximization problem. Moreover, the complexity of the whole proposed approach is then estimated.
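
    A minimal sketch of the outer-min/inner-max reduction described above, under toy assumptions: the objective f(x, y) = 0.5||x||^2 + x^T y - 0.5||y||^2 is hypothetical, and plain gradient descent/ascent stand in for the paper's Adaptive Gradient Method and Restarted Unified Acceleration Framework; only the inexact-oracle structure is illustrated.

```python
import numpy as np

# Toy saddle objective f(x, y) = 0.5*||x||^2 + x^T y - 0.5*||y||^2 (an assumption;
# the paper treats nonconvex uniformly concave composite objectives).
def grad_x(x, y):
    return x + y

def grad_y(x, y):
    return x - y

def inner_max(x, y0, steps=200, lr=0.1):
    """Approximately solve max_y f(x, y); its output defines an inexact
    first-order oracle for the outer problem phi(x) = max_y f(x, y)."""
    y = y0.copy()
    for _ in range(steps):
        y = y + lr * grad_y(x, y)           # gradient ascent stand-in
    return y

def outer_min(x0, outer_steps=100, lr=0.2):
    x, y = x0.copy(), np.zeros_like(x0)
    for _ in range(outer_steps):
        y = inner_max(x, y)                 # inexact oracle: approximate y*(x)
        x = x - lr * grad_x(x, y)           # outer step uses grad_x f(x, y*(x))
    return x, y

x_star, y_star = outer_min(np.array([1.0, -2.0]))
print(x_star, y_star)                        # both should approach zero
```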

  2. Umnov A.E., Umnov E.A.
    Using feedback functions to solve parametric programming problems
    Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1125-1151

    We consider a finite-dimensional optimization problem whose formulation, in addition to the required variables, contains parameters. The solution to this problem is a dependence of the optimal values of the variables on the parameters. In general, these dependences are not functions, since they can be multivalued and, even in the single-valued case, nondifferentiable. In addition, their domain of definition may be narrower than the domains of definition of the functions in the statement of the original problem. All these properties make it difficult to solve both the original parametric problem and other problems whose statements include these dependences. To overcome these difficulties, methods of nondifferentiable optimization are usually used.

    This article proposes an alternative approach that makes it possible to obtain solutions to parametric problems in a form free of these properties. It is shown that such representations can be studied using standard algorithms based on the Taylor formula. This form is a function that smoothly approximates the solution of the original problem for any parameter values specified in its statement, with the approximation error controlled by a special parameter. The proposed approximations are constructed using special functions that establish feedback (within the optimality conditions of the original problem) between the variables and the Lagrange multipliers. The method is described for linear problems and then generalized to the nonlinear case.

    From a computational point of view, constructing the approximation amounts to finding a saddle point of a modified Lagrange function of the original problem, where the modification is performed in a special way using the feedback functions. It is shown that the necessary conditions for the existence of such a saddle point are similar to the conditions of the Karush – Kuhn – Tucker theorem but contain neither inequality constraints nor complementary slackness conditions. The necessary conditions for the existence of a saddle point define the approximation implicitly, so the implicit function theorem is used to compute its differential characteristics. The same theorem is used to reduce the approximation error to an acceptable level.
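
    A minimal sketch of the saddle-point viewpoint, under simplifying assumptions: an ordinary (unmodified) Lagrangian of a hypothetical equality-constrained quadratic problem is used, and its saddle point is located by plain gradient descent-ascent; the feedback-function modification proposed by the authors is not reproduced here.

```python
import numpy as np

# Hypothetical problem: min 0.5*||x||^2  s.t.  A x = b.
# Its Lagrangian L(x, lam) = 0.5*||x||^2 + lam^T (A x - b) has a saddle point
# (min in x, max in lam); the paper works with a specially modified Lagrangian
# built from feedback functions, which is not reproduced in this sketch.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def grad_x(x, lam):
    return x + A.T @ lam

def grad_lam(x, lam):
    return A @ x - b

x, lam = np.zeros(2), np.zeros(1)
lr = 0.1
for _ in range(5000):
    x = x - lr * grad_x(x, lam)        # descent in the primal variables
    lam = lam + lr * grad_lam(x, lam)  # ascent in the dual variables

print(x, lam)   # expected saddle point: x = [0.5, 0.5], lam = [-0.5]
```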

    Features of the practical implementation feedback function method, including estimates of the rate of convergence to the exact solution are demonstrated for several specific classes of parametric optimization problems. Specifically, tasks searching for the global extremum of functions of many variables and the problem of multiple extremum (maximin-minimax) are considered. Optimization problems that arise when using multicriteria mathematical models are also considered. For each of these classes, there are demo examples.

  3. Gladin E.L., Borodich E.D.
    Variance reduction for minimax problems with a small dimension of one of the variables
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 257-275

    The paper is devoted to convex-concave saddle point problems where the objective is a sum of a large number of functions. Such problems attract considerable attention of the mathematical community due to the variety of applications in machine learning, including adversarial learning, adversarial attacks and robust reinforcement learning, to name a few. The individual functions in the sum usually represent losses related to examples from a data set. Additionally, the formulation admits a possibly nonsmooth composite term. Such terms often reflect regularization in machine learning problems. We assume that the dimension of one of the variable groups is relatively small (about a hundred or less), and the other one is large. This case arises, for example, when one considers the dual formulation for a minimization problem with a moderate number of constraints. The proposed approach is based on using Vaidya’s cutting plane method to minimize with respect to the outer block of variables. This optimization algorithm is especially effective when the dimension of the problem is not very large. An inexact oracle for Vaidya’s method is calculated via an approximate solution of the inner maximization problem, which is solved by the accelerated variance reduced algorithm Katyusha. Thus, we leverage the structure of the problem to achieve fast convergence. Separate complexity bounds for gradients of different components with respect to different variables are obtained in the study. The proposed approach imposes very mild assumptions on the objective. In particular, neither strong convexity nor smoothness is required with respect to the low-dimensional variable group. The number of steps of the proposed algorithm as well as the arithmetic complexity of each step explicitly depend on the dimensionality of the outer variable, hence the assumption that it is relatively small.
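
    A minimal sketch of the two-level structure described above, under toy assumptions: the finite-sum convex-concave objective is hypothetical, a plain gradient step replaces Vaidya's cutting-plane method in the low-dimensional outer block, and plain gradient ascent replaces Katyusha in the inner maximization; only the inexact-oracle interplay is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dx, dy = 50, 3, 200              # many summands, low-dim x, high-dim y (assumption)
B = rng.normal(size=(n, dx, dy)) / np.sqrt(dy)

# Toy finite-sum convex-concave objective (an assumption, not the paper's setting):
#   f(x, y) = (1/n) * sum_i [ 0.5*||x||^2 + x^T B_i y - 0.5*||y||^2 ]
def grad_x(x, y):
    return x + B.mean(axis=0) @ y

def grad_y(x, y):
    return B.mean(axis=0).T @ x - y

def inexact_oracle(x, y0, steps=100, lr=0.5):
    """Approximate max_y f(x, y); the paper uses the variance-reduced Katyusha
    method here, replaced by plain gradient ascent for brevity."""
    y = y0.copy()
    for _ in range(steps):
        y = y + lr * grad_y(x, y)
    return grad_x(x, y), y              # inexact gradient of phi(x) = max_y f(x, y)

# Outer loop over the low-dimensional block; the paper uses Vaidya's
# cutting-plane method, replaced here by a plain gradient step.
x, y = np.ones(dx), np.zeros(dy)
for _ in range(200):
    g, y = inexact_oracle(x, y)
    x = x - 0.2 * g

print(np.linalg.norm(x))                # should be close to 0 (the minimizer)
```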

  4. Dvinskikh D.M., Pirau V.V., Gasnikov A.V.
    On the relations of stochastic convex optimization problems with empirical risk minimization problems on $p$-norm balls
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319

    In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e.g., risk minimization) and mathematical statistics (e.g., maximum likelihood estimation). There are two main approaches to solving such problems, namely the Stochastic Approximation approach (online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline approach). In the offline approach, the problem is replaced by its empirical counterpart (the empirical risk minimization problem). The natural question is how to define the problem sample size, i.e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is also a solution of the original problem with the desired precision. This issue is one of the main issues in modern machine learning and optimization. In the last decade, significant advances have been made in these areas for convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of balls in arbitrary $p$-norms. We also explore how the parameter $p$ affects the estimates of the required number of terms in the empirical risk.

    In this paper, both convex and saddle point optimization problems are considered. For strongly convex problems, the existing results on equal sample sizes in both approaches (online and offline) were generalized to arbitrary norms. Moreover, it was shown that the strong convexity condition can be weakened: the obtained results remain valid for functions satisfying the quadratic growth condition. In the case when this condition is not met, it is proposed to regularize the original problem in an arbitrary norm. In contrast to convex problems, saddle point problems are much less studied. For saddle point problems, the sample size was obtained under the condition of $\gamma$-growth of the objective function. For $\gamma = 1$, this condition is the sharp minimum condition of convex problems. In this article, it was shown that the sample size in the case of a sharp minimum is almost independent of the desired accuracy of the solution of the original problem.
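
    A toy illustration (not the paper's estimates) of the two approaches on the simple stochastic problem min_x E[0.5*(x - xi)^2], whose exact solution is the mean of xi: the online approach runs one pass of stochastic approximation, while the offline approach solves the empirical (sample average) problem.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stochastic problem (an assumption, for illustration only):
#   min_x  E_xi[ 0.5 * (x - xi)^2 ],  xi ~ N(mu, 1),  true solution x* = mu.
mu, N = 3.0, 10_000
samples = mu + rng.normal(size=N)

# Online approach (stochastic approximation): one SGD pass over the stream.
x_sa = 0.0
for k, xi in enumerate(samples, start=1):
    x_sa -= (1.0 / k) * (x_sa - xi)        # step 1/k, stochastic gradient (x - xi)

# Offline approach (sample average approximation): replace the expectation by
# the empirical mean over the same N realizations and solve that problem exactly.
x_saa = samples.mean()                      # minimizer of the empirical risk

print(x_sa, x_saa, mu)                      # both estimates approach mu
```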

  5. Ostroukhov P.A., Kamalov R.A., Dvurechensky P.E., Gasnikov A.V.
    Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 357-376

    In this paper we propose high-order (tensor) methods for two types of saddle point problems. Firstly, we consider the classic min-max saddle point problem. Secondly, we consider the search for a stationary point of the saddle point objective via minimization of its gradient norm. Obviously, a stationary point does not always coincide with the optimal point. However, for a linear optimization problem with linear constraints the algorithm for gradient norm minimization becomes useful: in this case we can reconstruct the solution of the primal optimization problem from the solution of the gradient norm minimization problem for the dual function. In this paper we consider both types of problems without constraints. Additionally, we assume that the objective function is $\mu$-strongly convex in the first argument, $\mu$-strongly concave in the second argument, and that its $p$-th derivative is Lipschitz-continuous.

    For min-max problems we propose two algorithms. Since we consider a strongly convex-strongly concave problem, the first algorithm uses an existing tensor method for convex-concave saddle point problems and accelerates it with the restart technique, which yields a linear convergence rate. If we additionally assume that the objective has Lipschitz first and second derivatives, we can improve the performance even further by switching to another existing algorithm inside its region of quadratic convergence. Thus, we obtain the second algorithm, which has a global linear convergence rate and a local quadratic convergence rate.
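
    A minimal sketch of the restart technique itself, under toy assumptions: a few extragradient steps on a hypothetical scalar strongly convex-strongly concave objective stand in for the tensor method, and the restart wrapper simply reruns the base method from its own output.

```python
def restarted_scheme(base_solver, z0, rounds=10):
    """Restart technique: a base method for the convex-concave case is run for a
    fixed budget and then restarted from its own output; strong convexity and
    strong concavity turn such a scheme into linear convergence.
    `base_solver` is a hypothetical callable standing in for the tensor method."""
    z = z0
    for _ in range(rounds):
        z = base_solver(z)          # one restart round from the current point
    return z

# Stand-in base solver: a few extragradient steps on the toy saddle
#   f(x, y) = 0.5*x**2 + x*y - 0.5*y**2 (an assumption, scalar for brevity).
def extragradient_budget(z, steps=20, lr=0.3):
    x, y = z
    for _ in range(steps):
        gx, gy = x + y, x - y                  # grad_x f, grad_y f
        xh, yh = x - lr * gx, y + lr * gy      # extrapolation step
        gx, gy = xh + yh, xh - yh
        x, y = x - lr * gx, y + lr * gy        # correction step
    return (x, y)

print(restarted_scheme(extragradient_budget, (5.0, -3.0)))   # approaches (0, 0)
```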

    Finally, in convex optimization there exists a special methodology for solving gradient norm minimization problems by tensor methods. Its main idea is to use existing (near-)optimal algorithms inside a special framework. We emphasize that inside this framework the strong convexity assumption is not strictly necessary, because the convex objective can be regularized in a special way to make it strongly convex. In our article we transfer this framework to convex-concave objective functions and use it together with our aforementioned algorithm with global linear and local quadratic convergence rates.

    Since the saddle point problem is a particular case of the monotone variational inequality problem, the proposed methods also apply to strongly monotone variational inequality problems.

  6. Skorik S.N., Pirau V.V., Sedov S.A., Dvinskikh D.M.
    Comparison of stochastic approximation and sample average approximation for saddle point problem with bilinear coupling term
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 381-391

    Stochastic optimization is an active area of research due to significant advances in machine learning and their applications to everyday problems. In this paper, we consider two fundamentally different methods for solving stochastic optimization problems — online and offline algorithms. Each of the corresponding algorithms has its own qualitative advantages. For example, offline algorithms require solving an auxiliary problem with high accuracy; however, this can be done in a distributed manner, which opens up fundamental possibilities such as the construction of a dual problem. Despite these differences, both online and offline algorithms pursue a common goal — solving the stochastic optimization problem with a given accuracy. This is reflected in the comparison of the computational complexity of the described algorithms, which is demonstrated in this paper.

    The comparison of the described methods is carried out for two types of stochastic problems — convex optimization and saddle point problems. For stochastic convex optimization problems, the existing results make it possible to compare online and offline algorithms in some detail. In particular, for strongly convex problems the computational complexity of the two algorithms is the same, and the strong convexity condition can be weakened to the $\gamma$-growth condition on the objective function. From this point of view, saddle point problems are much less studied; nevertheless, existing results allow us to outline the main directions of research. Significant progress has been made for bilinear saddle point problems using online algorithms, whereas offline algorithms are represented by just one study. In this paper, that example demonstrates the similarity of both algorithms to the convex optimization case; the issue of the accuracy of solving the auxiliary problem for saddle point problems has also been worked out. On the other hand, the saddle point problem of stochastic optimization generalizes the convex one, that is, it is its logical continuation. This manifests itself in the fact that existing results from convex optimization can be transferred to saddle point problems. In this paper, such a transfer is carried out for the results of the online algorithm in the convex case, when the objective function satisfies the $\gamma$-growth condition.

  7. Ostroukhov P.A.
    Tensor methods inside mixed oracle for min-min problems
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 377-398

    In this article we consider problems of min-min type, that is, minimization over two groups of variables. In some respects this is similar to the classic min-max saddle point problem, although saddle point problems are usually more difficult. Min-min problems may occur when the groups of variables in a convex optimization problem have different dimensions or different domains. Such a problem structure makes it possible to split the main task into subproblems and to tackle it with mixed oracles. However, existing articles on this topic cover only zeroth- and first-order oracles; in our work we consider high-order tensor methods to solve the inner problem and a fast gradient method to solve the outer problem.
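
    A minimal sketch of the min-min splitting with a mixed oracle, under toy assumptions: the objective F(x, y) = 0.5*x^2 + x*y + y^2 is hypothetical, plain gradient descent replaces the tensor method in the inner problem, and a plain gradient step replaces the fast gradient method in the outer problem.

```python
# Toy min-min objective (an assumption, scalar blocks for brevity):
#   F(x, y) = 0.5*x**2 + x*y + y**2,  jointly convex, strongly convex in y.
def grad_x(x, y):
    return x + y

def grad_y(x, y):
    return x + 2.0 * y

def inner_solve(x, y0, steps=100, lr=0.3):
    """Accurate inner minimization over y; the paper uses high-order tensor
    methods here, replaced by plain gradient descent in this sketch."""
    y = y0
    for _ in range(steps):
        y = y - lr * grad_y(x, y)
    return y

# Outer loop over x; the paper uses a fast gradient method for strongly convex
# functions, replaced here by a plain gradient step on g(x) = min_y F(x, y).
x, y = 4.0, 0.0
for _ in range(100):
    y = inner_solve(x, y)
    x = x - 0.5 * grad_x(x, y)      # gradient of g(x) equals grad_x F(x, y*(x))

print(x, y)                          # approaches the minimizer (0, 0)
```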

    We assume that the outer problem is constrained to a convex compact set, while for the inner problem we consider both the unconstrained case and the case of a convex compact constraint set. By definition, tensor methods use high-order derivatives, so the time per iteration strongly depends on the dimensionality of the problem being solved. Therefore, we assume that the dimension of the inner problem variable is not greater than 1000. Additionally, we need some specific assumptions to be able to use mixed oracles. Firstly, we assume that the objective is convex in both groups of variables and that its gradient with respect to both variables is Lipschitz continuous. Secondly, we assume that the inner problem is strongly convex and its gradient is Lipschitz continuous. Also, since we are going to use tensor methods for the inner problem, we need its $p$-th derivative to be Lipschitz continuous ($p > 1$). Finally, we assume strong convexity of the outer problem in order to use the fast gradient method for strongly convex functions.

    We emphasize that we use the superfast tensor method to tackle the inner subproblem in the unconstrained case, and the accelerated high-order composite proximal method when the inner problem is solved on a compact set.

    Additionally, at the end of the article we compare the theoretical complexity of the obtained methods with that of the regular gradient method, which solves the mentioned problem as an ordinary convex optimization problem without taking its structure into account (Remarks 1 and 2).

  8. Plokhotnikov K.E.
    The problem of choosing solutions in the classical format of the description of a molecular system
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1573-1600

    The numerical methods recently developed by the author for calculating a molecular system by directly solving the Schrödinger equation with the Monte Carlo method have revealed a huge uncertainty in the choice of solutions. On the one hand, it turned out to be possible to build many new solutions; on the other hand, the problem of their connection with reality became sharply aggravated. As in ab initio quantum mechanical calculations, the problem of choosing solutions is not so acute after the transition to the classical format of describing a molecular system in terms of potential energy, the method of molecular dynamics, etc. In this paper, we investigate the problem of choosing solutions in the classical format of describing a molecular system without taking quantum mechanical prerequisites into account. As it turns out, the problem of choosing solutions in the classical format reduces to a specific marking of the configuration space in the form of a set of stationary points and the reconstruction of the corresponding potential energy function. In this formulation, the choice problem splits into two possible physical and mathematical problems: to find all stationary points of a given potential energy function (the direct problem), and to reconstruct the potential energy function from a given set of stationary points (the inverse problem). In this paper, the direct problem is discussed by means of a computational experiment, using the description of a monoatomic cluster as an example. The number and shape of the locally equilibrium (saddle) configurations of the binary potential are estimated numerically, and an appropriate measure is introduced to distinguish configurations in space. A format is proposed for constructing the entire chain of multiparticle contributions to the potential energy function: binary, three-particle, etc., up to the multiparticle potential of maximum partiality. An infinite number of locally equilibrium (saddle) configurations for the maximal multiparticle potential is discussed and illustrated. A method of varying the number of stationary points by combining multiparticle contributions to the potential energy function is proposed. The results listed above are aimed at reducing the huge arbitrariness in the choice of the form of the potential that currently takes place. Reducing this arbitrariness means that the available knowledge about a very specific set of stationary points is made consistent with the corresponding form of the potential energy function.
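
    A minimal sketch of the simplest instance of the direct problem, under assumptions not taken from the paper: the binary potential is taken to be the Lennard-Jones pair potential, and a single locally equilibrium configuration of a four-atom cluster is found by local minimization with SciPy's BFGS; saddle configurations and the multiparticle contributions discussed above are not addressed.

```python
import numpy as np
from scipy.optimize import minimize

def pair_potential(d):
    """Binary (pairwise) Lennard-Jones potential; the specific functional form
    is an assumption made purely for illustration."""
    return 4.0 * (d**-12 - d**-6)

def cluster_energy(flat_coords, n_atoms=4):
    """Total potential energy of a small monoatomic cluster as a sum of
    binary contributions over all pairs of atoms."""
    r = flat_coords.reshape(n_atoms, 3)
    energy = 0.0
    for i in range(n_atoms):
        for j in range(i + 1, n_atoms):
            energy += pair_potential(np.linalg.norm(r[i] - r[j]))
    return energy

# Direct problem, simplest instance: locate one locally equilibrium configuration
# (a stationary point that is a local minimum) starting from a perturbed guess.
rng = np.random.default_rng(2)
x0 = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0],
               [0.0, 1.1, 0.0], [0.0, 0.0, 1.1]]).ravel()
x0 = x0 + 0.05 * rng.normal(size=x0.size)
result = minimize(cluster_energy, x0, method="BFGS")

print(result.fun)   # for 4 Lennard-Jones atoms the lowest (tetrahedral) value is -6.0
```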

  9. Tominin Y.D., Tominin V.D., Borodich E.D., Kovalev D.A., Dvurechensky P.E., Gasnikov A.V., Chukanov S.V.
    On Accelerated Methods for Saddle-Point Problems with Composite Structure
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 433-467

    We consider strongly-convex-strongly-concave saddle-point problems with a general non-bilinear objective and different condition numbers with respect to the primal and dual variables. First, we consider such problems with smooth composite terms, one of which has finite-sum structure. For this setting we propose a variance reduction algorithm with complexity estimates superior to the existing bounds in the literature. Second, we consider finite-sum saddle-point problems with composite terms and propose several algorithms depending on the properties of the composite terms. When the composite terms are smooth we obtain better complexity bounds than the ones in the literature, including the bounds of recently proposed nearly-optimal algorithms which do not consider the composite structure of the problem. If the composite terms are prox-friendly, we propose a variance reduction algorithm that, on the one hand, is accelerated compared to existing variance reduction algorithms and, on the other hand, provides in the composite setting complexity bounds similar to those of the nearly-optimal algorithm designed for the noncomposite setting. Besides, our algorithms allow one to separate the complexity bounds, i.e., to estimate, for each part of the objective separately, the number of oracle calls that is sufficient to achieve a given accuracy. This is important since different parts can have different arithmetic complexity of the oracle, and it is desirable to call expensive oracles less often than cheap ones. The key to all these results is our general framework for saddle-point problems, which may be of independent interest. This framework, in turn, is based on our proposed Accelerated Meta-Algorithm for composite optimization with probabilistic inexact oracles and probabilistic inexactness in the proximal mapping, which may be of independent interest as well.

  10. Chen J., Lobanov A.V., Rogozin A.V.
    Nonsmooth Distributed Min-Max Optimization Using the Smoothing Technique
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 469-480

    Distributed saddle point problems (SPPs) have numerous applications in optimization, matrix games and machine learning. For example, the training of generative adversarial networks is represented as a min-max optimization problem, and training regularized linear models can be reformulated as an SPP as well. This paper studies distributed nonsmooth SPPs with Lipschitz-continuous objective functions. The objective function is represented as a sum of several components that are distributed between groups of computational nodes. The nodes, or agents, exchange information through some communication network that may be centralized or decentralized. A centralized network has a universal information aggregator (a server, or master node) that directly communicates with each of the agents and therefore can coordinate the optimization process. In a decentralized network, all the nodes are equal, the server node is not present, and each agent only communicates with its immediate neighbors.

    We assume that each of the nodes locally holds its objective and can compute its value at given points, i.e., has access to a zero-order oracle. Zero-order information is used when the gradient of the function is costly or impossible to compute, or when the function is not differentiable. For example, in reinforcement learning one needs to generate a trajectory to evaluate the current policy. This policy evaluation process can be interpreted as the computation of the function value. We propose an approach that uses a smoothing technique, i.e., applies a first-order method to the smoothed version of the initial function. It can be shown that the stochastic gradient of the smoothed function can be viewed as a random two-point gradient approximation of the initial function. Smoothing approaches have been studied for distributed zero-order minimization, and our paper generalizes the smoothing technique to SPPs.
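
    A minimal sketch of the smoothing technique described above, under toy assumptions: the saddle objective is hypothetical and accessed only through function values, the randomized two-point estimator approximates the gradient of the smoothed function, and plain (non-distributed) gradient descent-ascent uses these estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy saddle objective accessible only through function values (an assumption):
#   f(x, y) = 0.5*||x||^2 + x^T y - 0.5*||y||^2.
def f(x, y):
    return 0.5 * x @ x + x @ y - 0.5 * y @ y

def two_point_grad(func, z, tau=1e-3):
    """Random two-point estimate of the gradient of the smoothed function
    f_tau(z) = E_e[ f(z + tau*e) ]:  d/(2*tau) * (f(z+tau*e) - f(z-tau*e)) * e."""
    d = z.size
    e = rng.normal(size=d)
    e /= np.linalg.norm(e)                  # random direction on the unit sphere
    return d / (2.0 * tau) * (func(z + tau * e) - func(z - tau * e)) * e

# Zeroth-order gradient descent-ascent on the smoothed objective.
x, y = np.ones(3), -np.ones(3)
lr = 0.05
for _ in range(3000):
    gx = two_point_grad(lambda v: f(v, y), x)
    gy = two_point_grad(lambda v: f(x, v), y)
    x, y = x - lr * gx, y + lr * gy

print(np.linalg.norm(x), np.linalg.norm(y))   # both should be small near the saddle (0, 0)
```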
