Search results for 'primal-dual method':
Articles found: 7
  1. Tyurin A.I.
    Primal-dual fast gradient method with a model
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 263-274

    In this work we consider the possibility of using the concept of a $(\delta, L)$-model of a function for optimization problems in which, while solving the primal problem, one also needs to recover a solution of the dual problem. The concept of a $(\delta, L)$-model is based on the concept of the $(\delta, L)$-oracle proposed by Devolder–Glineur–Nesterov, where a function is approximated from above by a convex quadratic function with some additive noise $\delta$; they managed to obtain such noisy convex quadratic upper bounds even for nonsmooth functions. The $(\delta, L)$-model concept develops this idea by allowing a more general convex function, instead of a convex quadratic one, in the upper bound. The ability to recover the solution of the dual problem gives great benefits in various settings; for instance, in some cases it is faster to find a solution of the primal problem than of the dual one. Note that primal-dual methods are well studied, but usually each class of optimization problems has its own primal-dual method. Our goal is to develop a method that can find solutions in different classes of optimization problems; this is achieved through the $(\delta, L)$-model concept and the adaptive structure of our methods. Thereby, we develop a primal-dual adaptive gradient method and a fast gradient method with a $(\delta, L)$-model and prove convergence rates of these methods; moreover, for some classes of optimization problems the rates are optimal. The main idea is the following: we find a dual solution to an approximation of the primal problem using the $(\delta, L)$-model concept. It is much easier to find a solution of the approximated problem; however, we have to do it at each step of our method, thereby realizing the "divide and conquer" principle.
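
    To make the adaptive structure concrete, here is a minimal Python sketch of such a method, assuming the plain linear model $\psi(y, x) = \langle\nabla f(x), y - x\rangle$, the simplest instance of a $(\delta, L)$-model; the function names and the doubling/halving rule for $L$ are illustrative choices, not the authors' exact scheme.

    ```python
    import numpy as np

    def adaptive_gradient_with_model(f, grad, x0, n_iters=100, L0=1.0, delta=0.0):
        """Sketch of an adaptive gradient method driven by a (delta, L)-model.

        With the linear model, the acceptance test below is exactly the
        (delta, L)-model upper bound; L is doubled until the bound holds
        and halved afterwards to stay optimistic.
        """
        x, L = x0.copy(), L0
        for _ in range(n_iters):
            g = grad(x)
            while True:
                y = x - g / L  # argmin over y of model + (L/2)||y - x||^2
                if f(y) <= f(x) + g @ (y - x) + 0.5 * L * np.sum((y - x) ** 2) + delta:
                    break
                L *= 2.0       # bound violated: the current L is too small
            x = y
            L = max(L / 2.0, 1e-10)
        return x

    # toy usage: an ill-conditioned quadratic
    A = np.diag([1.0, 10.0])
    x_min = adaptive_gradient_with_model(lambda x: 0.5 * x @ A @ x,
                                         lambda x: A @ x,
                                         np.array([3.0, -2.0]))
    ```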

  2. Alkousa M.S., Gasnikov A.V., Dvurechensky P.E., Sadiev A.A., Razouk L.Ya.
    An approach for the nonconvex uniformly concave structured saddle point problem
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 225-237

    Recently, saddle point problems have received much attention due to their powerful modeling capability for problems from diverse domains. Applications occur in many applied areas, such as robust optimization, distributed optimization, game theory, and machine learning applications such as empirical risk minimization and training of generative adversarial networks. Therefore, many researchers have actively worked on developing numerical methods for solving saddle point problems in many different settings. This paper is devoted to developing a numerical method for solving saddle point problems in the nonconvex uniformly-concave setting. We study a general class of saddle point problems with composite structure and Hölder-continuous higher-order derivatives. To solve the problem under consideration, we propose an approach in which we reduce the problem to a combination of two auxiliary optimization problems, one for each group of variables: the outer minimization problem w.r.t. the primal variables and the inner maximization problem w.r.t. the dual variables. For the outer minimization problem we use the Adaptive Gradient Method, which is applicable to nonconvex problems and also works with the inexact oracle that is generated by approximately solving the inner problem. For the inner maximization problem we use the Restarted Unified Acceleration Framework, which unifies high-order acceleration methods for minimizing a convex function with Hölder-continuous higher-order derivatives. Separate complexity bounds are provided for the number of calls to the first-order oracle for the outer minimization problem and to the higher-order oracle for the inner maximization problem. Moreover, the complexity of the whole proposed approach is estimated.
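
    A minimal Python sketch of this outer/inner decomposition follows; for brevity, plain gradient steps stand in for both the Adaptive Gradient Method and the Restarted Unified Acceleration Framework, and all names, step sizes, and iteration counts are illustrative assumptions.

    ```python
    import numpy as np

    def inexact_saddle_solver(grad_x, grad_y, x0, y0,
                              outer_steps=50, inner_steps=200,
                              eta_x=0.1, eta_y=0.1):
        """Outer minimization of g(x) = max_y f(x, y) with an inexact oracle:
        the inner maximum is only solved approximately, so grad_x f(x, y)
        is an inexact gradient of g at x (Danskin-type argument)."""
        x, y = x0.copy(), y0.copy()
        for _ in range(outer_steps):
            for _ in range(inner_steps):        # inner maximization over y
                y = y + eta_y * grad_y(x, y)
            x = x - eta_x * grad_x(x, y)        # outer step with inexact gradient
        return x, y

    # toy saddle problem: f(x, y) = 0.5||x||^2 + <x, y> - 0.5||y||^2
    gx = lambda x, y: x + y
    gy = lambda x, y: x - y
    x_opt, y_opt = inexact_saddle_solver(gx, gy, np.ones(3), np.zeros(3))
    ```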

  3. Pham C.T., Tran T.T., Dang H.P.
    Image noise removal method based on nonconvex total generalized variation and primal-dual algorithm
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 527-541

    In various applications, e.g., astronomical imaging, electron microscopy, and tomography, images are often damaged by Poisson noise. At the same time, thermal motion leads to Gaussian noise. Therefore, in such applications the image is usually corrupted by mixed Poisson–Gaussian noise.

    In this paper, we propose a novel method for recovering images corrupted by mixed Poisson–Gaussian noise. In the proposed method, we develop a total variation-based model combining a nonconvex function with total generalized variation regularization, which suppresses staircase artifacts and preserves sharp edges.

    Numerically, we employ the primal-dual method combined with the classical iteratively reweighted $l_1$ algorithm to solve our minimization problem. Experimental results demonstrate the superiority of the proposed model and algorithm for mixed Poisson–Gaussian noise removal over state-of-the-art numerical methods.
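
    To illustrate the primal-dual update pattern, here is a minimal sketch of the classical Chambolle–Pock iterations for the convex TV (ROF) model $\min_u \|\nabla u\|_1 + \frac{\lambda}{2}\|u - f\|^2$; this is a simplified convex stand-in for the paper's nonconvex TGV model with iteratively reweighted $l_1$, and all parameters are illustrative.

    ```python
    import numpy as np

    def grad2d(u):
        """Forward differences with Neumann boundary conditions."""
        gx = np.zeros_like(u); gy = np.zeros_like(u)
        gx[:-1, :] = u[1:, :] - u[:-1, :]
        gy[:, :-1] = u[:, 1:] - u[:, :-1]
        return gx, gy

    def div2d(px, py):
        """Discrete divergence, the negative adjoint of grad2d."""
        dx = np.zeros_like(px); dy = np.zeros_like(py)
        dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
        dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
        return dx + dy

    def tv_denoise_pd(f, lam=10.0, tau=0.25, sigma=0.25, n_iters=200):
        """Chambolle-Pock primal-dual iterations for TV (ROF) denoising;
        tau * sigma * ||grad||^2 <= 1 holds since ||grad||^2 <= 8 in 2D."""
        u = f.copy(); u_bar = f.copy()
        px = np.zeros_like(f); py = np.zeros_like(f)
        for _ in range(n_iters):
            gx, gy = grad2d(u_bar)              # dual ascent + projection
            px += sigma * gx; py += sigma * gy
            norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
            px /= norm; py /= norm
            u_old = u                           # primal prox step (closed form)
            u = (u + tau * div2d(px, py) + tau * lam * f) / (1.0 + tau * lam)
            u_bar = 2.0 * u - u_old             # extrapolation
        return u

    noisy = np.random.default_rng(0).normal(size=(64, 64))
    restored = tv_denoise_pd(noisy)
    ```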

  4. Gasnikov A.V., Kubentayeva M.B.
    Searching stochastic equilibria in transport networks by universal primal-dual gradient method
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 335-345

    We consider one of the problems of transport modelling: searching for the equilibrium distribution of traffic flows in a network. We use the classic Beckmann model to describe time costs and the flow distribution in a network represented by a directed graph. Meanwhile, agents' behavior is not completely rational, which is described by introducing Markov logit dynamics: a driver selects a route randomly according to the Gibbs distribution, taking into account the current time costs on the edges of the graph. Thus, the problem is reduced to finding the stationary distribution of these dynamics, which is a stochastic Nash–Wardrop equilibrium in the corresponding population congestion game on the transport network. Since the game is potential, this problem is equivalent to minimizing some functional over flow distributions. The stochasticity manifests itself in the appearance of an entropy regularization, in contrast to the non-stochastic case. The dual problem is constructed to obtain a solution of the optimization problem, and the universal primal-dual gradient method is applied to it. A major feature of this method is its adaptive adjustment to the local smoothness of the problem, which is most important when the objective function has a complex structure and a prior smoothness bound cannot be obtained with acceptable accuracy. Such a situation occurs in the considered problem, since the properties of the function strongly depend on the transport graph, on which we do not impose strong restrictions. The article describes the algorithm, including the numerical differentiation used to calculate the value and gradient of the objective function. In addition, the paper presents a theoretical estimate of the time complexity of the algorithm and the results of numerical experiments conducted on the network of a small American town.
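
    The adaptive adjustment to local smoothness can be sketched with Nesterov's universal gradient method in its plain (non primal-dual) form; the slack $\varepsilon/2$ in the backtracking test is what lets one code adapt to unknown Hölder smoothness, and the names and constants below are illustrative.

    ```python
    import numpy as np

    def universal_gradient_method(f, grad, x0, eps=1e-3, L0=1.0, n_iters=300):
        """Line search accepts a step once the quadratic upper bound holds
        up to slack eps/2; no prior smoothness bound is required."""
        x, L = x0.copy(), L0
        for _ in range(n_iters):
            g = grad(x)
            while True:
                y = x - g / L
                if f(y) <= f(x) + g @ (y - x) + 0.5 * L * np.sum((y - x) ** 2) + 0.5 * eps:
                    break
                L *= 2.0
            x, L = y, L / 2.0
        return x

    # toy objective |x|^{3/2} whose gradient is only Hoelder continuous;
    # the method converges without knowing the Hoelder exponent
    f = lambda x: np.sum(np.abs(x) ** 1.5)
    g = lambda x: 1.5 * np.sign(x) * np.abs(x) ** 0.5
    sol = universal_gradient_method(f, g, np.array([2.0, -1.0]))
    ```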

  5. Ostroukhov P.A., Kamalov R.A., Dvurechensky P.E., Gasnikov A.V.
    Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 357-376

    In this paper we propose high-order (tensor) methods for two types of saddle point problems. First, we consider the classic min-max saddle point problem. Second, we consider the search for a stationary point of the saddle point objective via minimization of its gradient norm. Obviously, a stationary point does not always coincide with the optimal point. However, if we have a linear optimization problem with linear constraints, the algorithm for gradient norm minimization becomes useful: in this case we can reconstruct the solution of the primal optimization problem from the solution of the gradient norm minimization of the dual function. In this paper we consider both types of problems without constraints. Additionally, we assume that the objective function is $\mu$-strongly convex in the first argument, $\mu$-strongly concave in the second argument, and that the $p$-th derivative of the objective is Lipschitz-continuous.

    For min-max problems we propose two algorithms. Since we consider a strongly convex-strongly concave problem, the first algorithm uses an existing tensor method for convex-concave saddle point problems and accelerates it with the restart technique; the complexity of such an algorithm is linear. If we additionally assume that the objective has Lipschitz first and second derivatives, we can improve its performance even more by switching to another existing algorithm within its region of quadratic convergence. Thus, we obtain the second algorithm, which has a global linear convergence rate and a local quadratic convergence rate.
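
    The restart technique is generic; a minimal first-order sketch below shows how a method with a sublinear rate acquires a linear rate under strong convexity. The paper applies the same idea to a tensor method, so accelerated gradient descent, the iteration budget, and the names here are illustrative stand-ins.

    ```python
    import numpy as np

    def agd(grad, x0, L, n_iters):
        """Nesterov's accelerated gradient: f(x_k) - f* = O(L / k^2)."""
        x, y, t = x0.copy(), x0.copy(), 1.0
        for _ in range(n_iters):
            x_new = y - grad(y) / L
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
        return x

    def restarted_agd(grad, x0, L, mu, n_restarts=10):
        """Restart wrapper: with mu-strong convexity, O(sqrt(L/mu)) inner
        iterations halve the distance to the solution, so restarting
        turns the O(1/k^2) rate into a linear one."""
        budget = int(np.ceil(np.sqrt(8.0 * L / mu)))
        x = x0.copy()
        for _ in range(n_restarts):
            x = agd(grad, x, L, budget)
        return x

    # toy usage on a strongly convex quadratic (mu = 1, L = 100)
    A = np.diag([1.0, 100.0])
    sol = restarted_agd(lambda x: A @ x, np.array([5.0, 5.0]), L=100.0, mu=1.0)
    ```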

    Finally, in convex optimization there exists a special methodology for solving gradient norm minimization problems by tensor methods. Its main idea is to use existing (near-)optimal algorithms inside a special framework. We emphasize that inside this framework the assumption of strong convexity is not necessary, because we can regularize a convex objective in a special way to make it strongly convex. In our article we transfer this framework to convex-concave objective functions and combine it with our aforementioned algorithm with a global linear and a local quadratic convergence rate.

    Since the saddle point problem is a particular case of the monotone variational inequality problem, the proposed methods also apply to strongly monotone variational inequality problems.

  6. Tominin Y.D., Tominin V.D., Borodich E.D., Kovalev D.A., Dvurechensky P.E., Gasnikov A.V., Chukanov S.V.
    On Accelerated Methods for Saddle-Point Problems with Composite Structure
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 433-467

    We consider strongly-convex-strongly-concave saddle-point problems with a general non-bilinear objective and different condition numbers with respect to the primal and dual variables. First, we consider such problems with smooth composite terms, one of which has a finite-sum structure. For this setting we propose a variance reduction algorithm with complexity estimates superior to the existing bounds in the literature. Second, we consider finite-sum saddle-point problems with composite terms and propose several algorithms depending on the properties of the composite terms. When the composite terms are smooth, we obtain better complexity bounds than the ones in the literature, including the bounds of recently proposed nearly-optimal algorithms that do not take the composite structure of the problem into account. If the composite terms are prox-friendly, we propose a variance reduction algorithm that, on the one hand, is accelerated compared to existing variance reduction algorithms and, on the other hand, provides in the composite setting complexity bounds similar to those of the nearly-optimal algorithm designed for the noncomposite setting. Besides, our algorithms allow one to separate the complexity bounds, i.e., to estimate, for each part of the objective separately, the number of oracle calls sufficient to achieve a given accuracy. This is important since different parts can have different arithmetic complexity of the oracle, and it is desirable to call expensive oracles less often than cheap ones. The key to all these results is our general framework for saddle-point problems, which may be of independent interest. This framework, in turn, is based on our proposed Accelerated Meta-Algorithm for composite optimization with probabilistic inexact oracles and probabilistic inexactness in the proximal mapping, which may be of independent interest as well.
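
    The variance-reduction ingredient of the finite-sum algorithms can be illustrated with a minimal SVRG-style sketch for plain minimization (not the saddle-point setting of the paper); the control-variate gradient estimate is the essential idea, and all names and constants are illustrative.

    ```python
    import numpy as np

    def svrg(grads, x0, eta=0.1, n_epochs=30, seed=0):
        """SVRG for min_x (1/n) sum_i f_i(x): the estimate
        grads[i](x) - grads[i](x_ref) + full_grad(x_ref)
        is unbiased and its variance vanishes as x approaches x_ref."""
        rng = np.random.default_rng(seed)
        n = len(grads)
        x = x0.copy()
        for _ in range(n_epochs):
            x_ref = x.copy()
            full = sum(g(x_ref) for g in grads) / n   # one expensive full gradient
            for _ in range(n):                        # n cheap stochastic steps
                i = rng.integers(n)
                v = grads[i](x) - grads[i](x_ref) + full
                x = x - eta * v
        return x

    # toy finite sum f_i(x) = 0.5 ||x - b_i||^2; the minimizer is the mean of b_i
    bs = [np.array([1.0, -1.0]), np.array([3.0, 0.0]), np.array([-1.0, 4.0])]
    grads = [lambda x, b=b: x - b for b in bs]
    sol = svrg(grads, np.zeros(2))
    ```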

  7. Tupitsa N.K.
    On accelerated adaptive methods and their modifications for alternating minimization
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 497-515

    In the first part of the paper we present a convergence analysis of the AGMsDR method on a new class of functions: in general nonconvex functions with $M$-Lipschitz-continuous gradients that satisfy the Polyak–Lojasiewicz condition. The method does not need the value of $\mu^{PL}>0$ in the condition and converges linearly with a scale factor $\left(1 - \frac{\mu^{PL}}{M}\right)$. It was previously proved that the method converges as $O\left(\frac1{k^2}\right)$ if a function is convex and has an $M$-Lipschitz-continuous gradient, and converges linearly with a scale factor $\left(1 - \sqrt{\frac{\mu^{SC}}{M}}\right)$ if the strong convexity parameter $\mu^{SC}>0$ is known. The novelty is that linear convergence is preserved even when the parameter ($\mu^{PL}$ or $\mu^{SC}$) is unknown, although without the square root in the scale factor.
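
    To see where the $\left(1 - \frac{\mu^{PL}}{M}\right)$ factor comes from, here is a minimal sketch with plain gradient descent (not AGMsDR): the method itself never uses $\mu^{PL}$, yet the standard one-step argument under the Polyak–Lojasiewicz condition gives the linear rate.

    ```python
    import numpy as np

    def gd_under_pl(grad, x0, M, n_iters=100):
        """Gradient descent with step 1/M. Under the PL condition
        ||grad f(x)||^2 >= 2 * mu_PL * (f(x) - f*), each step gives
        f(x+) - f* <= (1 - mu_PL / M) * (f(x) - f*), with mu_PL unused."""
        x = x0.copy()
        for _ in range(n_iters):
            x = x - grad(x) / M
        return x

    # toy example: quadratic with mu_PL = 0.5 and M = 4
    A = np.diag([0.5, 4.0])
    sol = gd_under_pl(lambda x: A @ x, np.array([1.0, 1.0]), M=4.0)
    ```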

    The second part presents a modification of the AGMsDR method for solving problems that allow alternating minimization (Alternating AGMsDR). Similar results are proved.
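
    For context, a generic alternating minimization loop of the kind the Alternating AGMsDR modification accelerates is sketched below; the exact block-minimization oracles and the toy objective are illustrative assumptions.

    ```python
    def alternating_minimization(argmin_x, argmin_y, x0, y0, n_iters=50):
        """Each block is minimized exactly while the other is held fixed."""
        x, y = x0, y0
        for _ in range(n_iters):
            x = argmin_x(y)   # x <- argmin_x F(x, y)
            y = argmin_y(x)   # y <- argmin_y F(x, y)
        return x, y

    # toy objective F(x, y) = (x - 2y)^2 + y^2 + (x - 1)^2
    amx = lambda y: (2.0 * y + 1.0) / 2.0   # dF/dx = 0
    amy = lambda x: 2.0 * x / 5.0           # dF/dy = 0
    x_f, y_f = alternating_minimization(amx, amy, 0.0, 0.0)  # -> (5/6, 1/3)
    ```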

    As a result, we present adaptive accelerated methods that converge as $O\left(\min\left\lbrace\frac{M}{k^2},\,\left(1-{\frac{\mu^{PL}}{M}}\right)^{(k-1)}\right\rbrace\right)$ on a class of convex functions with an $M$-Lipschitz-continuous gradient that satisfy the Polyak–Lojasiewicz condition. The algorithms do not need the values of $M$ and $\mu^{PL}$. If the Polyak–Lojasiewicz condition does not hold, the convergence is $O\left(\frac1{k^2}\right)$, and no tuning is needed.

    We also consider the adaptive catalyst envelope of non-accelerated gradient methods, which allows acceleration up to $O\left(\frac1{k^2}\right)$. We present a numerical comparison of non-accelerated adaptive gradient descent accelerated by the adaptive catalyst envelope with AGMsDR, Alternating AGMsDR, APDAGD (Adaptive Primal-Dual Accelerated Gradient Descent), and Sinkhorn's algorithm on the problem dual to the optimal transport problem.
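
    A catalyst-type envelope can be sketched generically: each outer step approximately minimizes a regularized surrogate with the (non-accelerated) inner method and then adds momentum. The fixed momentum coefficient below is a simplification of the adaptive rule, and all names are illustrative.

    ```python
    import numpy as np

    def catalyst_envelope(prox_inner, x0, beta=0.5, n_outer=30):
        """prox_inner(y) approximately solves argmin_x f(x) + kappa/2 ||x - y||^2
        with any non-accelerated method; the envelope adds momentum on top."""
        x_prev = x0.copy(); y = x0.copy()
        for _ in range(n_outer):
            x = prox_inner(y)
            y = x + beta * (x - x_prev)   # extrapolation between outer steps
            x_prev = x
        return x

    # toy usage: f(x) = 0.5 x^T A x, for which the surrogate minimizer is closed-form
    A = np.diag([1.0, 50.0]); kappa = 1.0
    prox = lambda y: np.linalg.solve(A + kappa * np.eye(2), kappa * y)
    sol = catalyst_envelope(prox, np.array([3.0, -3.0]))
    ```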

    The conducted experiments show faster convergence of Alternating AGMsDR in comparison with the described catalyst approach and AGMsDR, despite the same asymptotic rate $O\left(\frac1{k^2}\right)$. Such behavior can be explained by the linear convergence of the AGMsDR method and was verified on quadratic functions. Alternating AGMsDR demonstrated better performance than AGMsDR.
