Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities


In this paper we propose high-order (tensor) methods for two types of saddle point problems. First, we consider the classic min-max saddle point problem. Second, we consider the search for a stationary point of the saddle point objective via minimization of its gradient norm. Obviously, a stationary point does not always coincide with an optimal point. However, for a linear optimization problem with linear constraints, an algorithm for gradient norm minimization becomes useful: in that case the solution of the primal optimization problem can be reconstructed from the solution of the gradient norm minimization problem for the dual function. In this paper we consider both types of problems without constraints. Additionally, we assume that the objective function is $\mu$-strongly convex in the first argument, $\mu$-strongly concave in the second argument, and that its $p$-th derivative is Lipschitz-continuous.
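The setting above can be written out explicitly; the notation below is a plausible rendering, not quoted from the paper:

```latex
\min_{x \in \mathbb{R}^{n}} \max_{y \in \mathbb{R}^{m}} f(x, y),
```

where $f$ is $\mu$-strongly convex in $x$, $\mu$-strongly concave in $y$, and, writing $z = (x, y)$, its $p$-th derivative satisfies the Lipschitz condition

```latex
\| D^{p} f(z) - D^{p} f(z') \| \le L_{p} \| z - z' \|
\quad \text{for all } z, z'.
```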

For min-max problems we propose two algorithms. Since we consider a strongly convex-strongly concave problem, the first algorithm applies the existing tensor method for convex-concave saddle point problems and accelerates it with the restart technique, which yields a linear convergence rate. If we additionally assume that the first- and second-order derivatives of the objective are Lipschitz-continuous, the performance can be improved further by switching to another existing algorithm inside its region of quadratic convergence. This gives the second algorithm, which has a global linear convergence rate and a local quadratic convergence rate.
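The restart template itself is generic. The sketch below is our own minimal illustration with a first-order base method (extragradient with averaging) standing in for the tensor method of the paper; the toy objective $f(x, y) = \frac{\mu}{2}x^2 + xy - \frac{\mu}{2}y^2$ and all parameter values are assumptions chosen only so the example runs.

```python
import numpy as np

# Toy strongly convex-strongly concave objective (our illustrative choice,
# not from the paper): f(x, y) = (mu/2) x^2 + x y - (mu/2) y^2, mu = 1.
# Its saddle operator F(z) = (df/dx, -df/dy) is strongly monotone,
# with the unique saddle point at z = (0, 0).
MU = 1.0

def F(z):
    x, y = z
    return np.array([MU * x + y, MU * y - x])

def extragradient_avg(z0, steps, gamma):
    """Base method: extragradient with iterate averaging.

    For convex-concave problems such base methods carry sublinear
    guarantees on the averaged iterate; the restart wrapper below
    converts that into a linear rate under strong convexity-concavity.
    """
    z = z0.copy()
    avg = np.zeros_like(z)
    for _ in range(steps):
        z_half = z - gamma * F(z)       # extrapolation step
        z = z - gamma * F(z_half)       # correction step
        avg += z
    return avg / steps

def restarted_solver(z0, num_restarts=8, steps=20, gamma=0.3):
    """Restart scheme: each stage shrinks the distance to the saddle
    point by a constant factor, so the stages compound linearly."""
    z = z0.copy()
    for _ in range(num_restarts):
        z = extragradient_avg(z, steps, gamma)
    return z

z_final = restarted_solver(np.array([5.0, 5.0]))
print(np.linalg.norm(z_final))  # distance to the saddle point (0, 0)
```

In the paper's setting the base method is the tensor method for convex-concave problems rather than extragradient, but the wrapper is the same: each restart runs the base method from the previous output, and strong convexity-concavity guarantees a constant contraction per stage.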

Finally, in convex optimization there exists a special methodology for solving gradient norm minimization problems by tensor methods. Its main idea is to use existing (near-)optimal algorithms inside a special framework. We emphasize that within this framework the strong convexity assumption is not strictly necessary, because a convex objective can be regularized in a special way to make it strongly convex. In this paper we transfer this framework to convex-concave objective functions and combine it with our aforementioned algorithm with global linear and local quadratic convergence rates.
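The regularization step mentioned above can be sketched as follows; this is the standard construction for the convex-concave case, written in notation we assume rather than quote:

```latex
f_{\delta}(x, y) = f(x, y)
  + \frac{\delta}{2}\, \| x - x_{0} \|^{2}
  - \frac{\delta}{2}\, \| y - y_{0} \|^{2},
```

so that for any convex-concave $f$ the function $f_{\delta}$ is $\delta$-strongly convex in $x$ and $\delta$-strongly concave in $y$, at the cost of a controlled perturbation of the solution governed by $\delta$.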

Since the saddle point problem is a particular case of the monotone variational inequality problem, the proposed methods can also be applied to strongly monotone variational inequality problems.
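The reduction is standard; in notation we assume for illustration, the variational inequality problem is to find $z^{*}$ such that

```latex
\langle F(z^{*}),\, z - z^{*} \rangle \ge 0 \quad \text{for all } z,
\qquad
F(z) = \bigl( \nabla_{x} f(x, y),\; -\nabla_{y} f(x, y) \bigr),
```

and $\mu$-strong convexity-concavity of $f$ makes the operator $F$ $\mu$-strongly monotone.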

Keywords: variational inequality, saddle point problem, high-order smoothness, tensor methods, gradient norm minimization
Citation in English: Ostroukhov P.A., Kamalov R.A., Dvurechensky P.E., Gasnikov A.V. Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities // Computer Research and Modeling, 2022, vol. 14, no. 2, pp. 357-376
DOI: 10.20537/2076-7633-2022-14-2-357-376

Indexed in Scopus


The journal is included in the Russian Science Citation Index

