Search results for 'fast gradient method':
Articles found: 8
  1. Gasnikov A.V., Gorbunov E.A., Kovalev D.A., Mohammed A.A., Chernousova E.O.
    The global rate of convergence for optimal tensor methods in smooth convex optimization
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 737-753

    In this work we consider the Monteiro – Svaiter accelerated hybrid proximal extragradient (A-HPE) framework and the accelerated Newton proximal extragradient (A-NPE) framework. The latter contains an optimal method for sufficiently smooth convex optimization problems with a second-order oracle. We generalize the A-NPE framework to oracles (schemes) with higher-order derivatives, replacing the Newton-type step used for the auxiliary problem in A-NPE by a regularized Newton (tensor) step (Yu. Nesterov, 2018). Moreover, we generalize the large-step A-HPE/A-NPE framework by replacing the Monteiro – Svaiter large-step condition so that the framework can work with high-order schemes. The main contribution of the paper is the following: we propose optimal high-order methods for convex optimization problems. To the best of our knowledge, until now only zeroth-, first- and second-order optimal methods matching the lower bounds were known. For higher-order schemes there has been a gap between the lower bounds (Arjevani, Shamir, Shiff, 2017) and the existing high-order (tensor) methods (Nesterov – Polyak, 2006; Yu. Nesterov, 2008; M. Baes, 2009; Yu. Nesterov, 2018); asymptotically, the ratio of the convergence rates of the best existing methods to the lower bounds is about 1.5. In this work we eliminate this gap and show that the lower bounds are tight. We also consider sufficiently smooth strongly convex optimization problems and show how to generalize the proposed methods to this case. The basic idea is to use a restart technique until the iteration sequence reaches the region of quadratic convergence of Newton's method, and then to switch to Newton's method. One can show that the resulting method converges with the optimal rate up to a logarithmic factor. Note that the technique proposed in this work can be generalized to the case when the auxiliary problem cannot be solved exactly and even when the derivatives of the functional cannot be computed exactly. Moreover, the proposed technique can be generalized to composite optimization problems and, in particular, to constrained convex optimization problems. We also formulate a list of open questions that arise around the main result of this paper (an optimal universal method of high order, etc.).
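    To put the gap mentioned above in standard notation (a sketch of the known bounds, not a quotation of the paper's exact statements): for convex problems with a Lipschitz $p$-th derivative, the best previously known tensor methods and the lower bound require, respectively,

    $$ N_{\text{upper}}(\varepsilon) = O\!\left(\varepsilon^{-\frac{1}{p+1}}\right), \qquad N_{\text{lower}}(\varepsilon) = \Omega\!\left(\varepsilon^{-\frac{2}{3p+1}}\right), \qquad \frac{1/(p+1)}{2/(3p+1)} = \frac{3p+1}{2(p+1)} \xrightarrow[p \to \infty]{} \frac{3}{2}, $$

    oracle calls to reach accuracy $\varepsilon$, which is the asymptotic factor of about 1.5 between the exponents; the optimal methods of the paper attain the lower-bound rate up to logarithmic factors.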

  2. Tyurin A.I.
    Primal-dual fast gradient method with a model
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 263-274

    In this work we consider the possibility of using the concept of a $(\delta, L)$-model of a function for optimization problems in which, while solving the primal problem, one needs to recover a solution of the dual problem. The concept of a $(\delta, L)$-model is based on the concept of a $(\delta, L)$-oracle proposed by Devolder – Glineur – Nesterov, who approximate a function from above by a convex quadratic function with some additive noise $\delta$. They managed to obtain such noisy convex quadratic upper bounds even for nonsmooth functions. The concept of a $(\delta, L)$-model develops this idea by allowing a more general convex function, instead of a convex quadratic one, in the upper bound. The possibility of recovering the solution of the dual problem brings great benefits in various problems; for instance, in some cases it is faster to find a solution of the primal problem than of the dual one. Note that primal-dual methods are well studied, but usually each class of optimization problems has its own primal-dual method. Our goal is to develop a method that can find solutions in different classes of optimization problems. This is achieved through the use of the concept of a $(\delta, L)$-model and the adaptive structure of our methods. Thereby, we develop a primal-dual adaptive gradient method and a fast gradient method with a $(\delta, L)$-model and prove convergence rates for them; moreover, for some classes of optimization problems these rates are optimal. The main idea is the following: we find a dual solution to an approximation of the primal problem using the concept of a $(\delta, L)$-model. It is much easier to find a solution to the approximated problem; however, we have to do this at every step of our method, which realizes the principle of “divide and conquer”.
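    For reference, the underlying definition can be sketched as follows (stated in the spirit of the Devolder – Glineur – Nesterov line of work rather than quoted from the paper): a pair $\big(f_\delta(x), \psi_\delta(\cdot, x)\big)$ is a $(\delta, L)$-model of $f$ at the point $x$ if $\psi_\delta(x, x) = 0$, $\psi_\delta(\cdot, x)$ is convex, and for all $y$

    $$ 0 \le f(y) - f_\delta(x) - \psi_\delta(y, x) \le \frac{L}{2}\|y - x\|^2 + \delta. $$

    The classical $(\delta, L)$-oracle is recovered by the linear choice $\psi_\delta(y, x) = \langle g_\delta(x), y - x \rangle$.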

  3. Gladin E.L., Zainullina K.E.
    Ellipsoid method for convex stochastic optimization in small dimension
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1137-1147

    The article considers minimization of the expectation of a convex function. Problems of this type often arise in machine learning and a variety of other applications. In practice, stochastic gradient descent (SGD) and similar procedures are usually used to solve such problems. We propose to use the ellipsoid method with mini-batching, which converges linearly and can be more efficient than SGD for a class of problems. This is verified by our experiments, which are publicly available. The algorithm requires neither smoothness nor strong convexity of the objective to achieve linear convergence. Thus, its complexity does not depend on the condition number of the problem. We prove that the method arrives at an approximate solution with a given probability when using mini-batches of size proportional to the desired accuracy to the power $-2$. This enables efficient parallel execution of the algorithm, whereas the possibilities for batch parallelization of SGD are rather limited. Despite fast convergence, the ellipsoid method can result in a greater total number of oracle calls than SGD, which works decently with small batches. The complexity is quadratic in the dimension of the problem, hence the method is suitable for relatively small dimensionalities.
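    To illustrate the kind of update involved, below is a minimal sketch of one central-cut ellipsoid step driven by a mini-batched subgradient (written in Python; the names, the batch handling and the omission of the paper's stopping rule and probabilistic analysis are illustrative assumptions, not the authors' implementation).

```python
import numpy as np

def ellipsoid_minibatch_step(c, B, sample_subgrads):
    """One central-cut ellipsoid update using a mini-batch averaged subgradient.

    c : current ellipsoid center, shape (n,), n >= 2
    B : matrix of the ellipsoid {x : (x - c)^T B^{-1} (x - c) <= 1}, shape (n, n)
    sample_subgrads : per-sample stochastic subgradients at c, shape (m, n)
    """
    n = c.shape[0]
    g = sample_subgrads.mean(axis=0)           # mini-batch estimate of the subgradient
    Bg = B @ g
    gBg = g @ Bg
    c_new = c - Bg / ((n + 1) * np.sqrt(gBg))  # shift the center into the descent half-space
    B_new = (n**2 / (n**2 - 1.0)) * (B - (2.0 / (n + 1)) * np.outer(Bg, Bg) / gBg)
    return c_new, B_new
```

    Iterating this step shrinks the ellipsoid volume by a fixed factor per iteration, which is the source of the linear convergence mentioned above; the mini-batch size controls how reliably the averaged subgradient separates the solution from the discarded half-ellipsoid.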

  4. Gladin E.L., Borodich E.D.
    Variance reduction for minimax problems with a small dimension of one of the variables
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 257-275

    The paper is devoted to convex-concave saddle point problems where the objective is a sum of a large number of functions. Such problems attract considerable attention from the mathematical community due to the variety of applications in machine learning, including adversarial learning, adversarial attacks and robust reinforcement learning, to name a few. The individual functions in the sum usually represent losses related to examples from a data set. Additionally, the formulation admits a possibly nonsmooth composite term. Such terms often reflect regularization in machine learning problems. We assume that the dimension of one of the variable groups is relatively small (about a hundred or less) and the other one is large. This case arises, for example, when one considers the dual formulation of a minimization problem with a moderate number of constraints. The proposed approach is based on using Vaidya’s cutting plane method to minimize with respect to the outer block of variables. This optimization algorithm is especially effective when the dimension of the problem is not very large. An inexact oracle for Vaidya’s method is computed via an approximate solution of the inner maximization problem, which is solved by the accelerated variance-reduced algorithm Katyusha. Thus, we leverage the structure of the problem to achieve fast convergence. Separate complexity bounds for gradients of different components with respect to different variables are obtained in the study. The proposed approach imposes very mild assumptions on the objective. In particular, neither strong convexity nor smoothness is required with respect to the low-dimensional variable group. The number of steps of the proposed algorithm, as well as the arithmetic complexity of each step, explicitly depends on the dimensionality of the outer variable, hence the assumption that it is relatively small.
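    To make the inexact-oracle construction concrete (a sketch in standard notation under the convexity assumptions above, not the paper's exact statement): write the outer objective as $g(y) := \max_x f(x, y)$ and let $\tilde{x}(y)$ be an approximate inner maximizer, computed by Katyusha, with $g(y) - f(\tilde{x}(y), y) \le \delta$. Then convexity of $f(\tilde{x}(y), \cdot)$ gives, for all $y'$,

    $$ g(y') \;\ge\; f(\tilde{x}(y), y') \;\ge\; f(\tilde{x}(y), y) + \langle \nabla_y f(\tilde{x}(y), y),\, y' - y \rangle \;\ge\; g(y) - \delta + \langle \nabla_y f(\tilde{x}(y), y),\, y' - y \rangle, $$

    so $\nabla_y f(\tilde{x}(y), y)$ is a $\delta$-subgradient of $g$ at $y$; this is the kind of inexact information fed to Vaidya’s cutting plane method over the low-dimensional variable.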

  5. Burago N.G., Nikitin I.S.
    Algorithms of through calculation for damage processes
    Computer Research and Modeling, 2018, v. 10, no. 5, pp. 645-666

    The paper reviews existing approaches to computing the fracture of solids. The main attention is paid to algorithms that use a unified approach to the computation of deformation both for the intact and for the damaged states of the material. A thermodynamic derivation of unified rheological relations is presented that takes into account the elastic, viscous and plastic properties of materials and describes the loss of resistance to deformation as microdamage accumulates. It is shown that the mathematical model under consideration provides a continuous dependence of the solution on the input parameters (parameters of the material medium, initial and boundary conditions, discretization parameters) in the presence of material softening.

    Explicit and implicit non-matrix algorithms for computing the evolution of deformation and the development of fracture are presented. The implicit schemes are implemented using iterations of the conjugate gradient method, with the cost of each iteration coinciding exactly with the cost of a time step of a two-layer explicit scheme, so the solution algorithms are very simple.

    The results of solving typical problems of fracture of deformable solids for slow (quasistatic) and fast (dynamic) deformation processes are presented. Based on computational experience, recommendations are given for modeling fracture processes and for ensuring the reliability of numerical solutions.
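    The remark about implicit schemes above can be illustrated by a generic matrix-free conjugate gradient routine (a Python sketch under the assumption of a symmetric positive definite discrete operator; it is not the authors' code): the operator is supplied only through a routine apply_A, so each CG iteration costs roughly as much as one explicit time step.

```python
import numpy as np

def matrix_free_cg(apply_A, b, x0, tol=1e-8, max_iter=200):
    """Conjugate gradient for A x = b with A available only through apply_A(x).

    apply_A(x) is assumed to evaluate the action of the (symmetric positive
    definite) discrete operator, much like the right-hand side evaluation of a
    two-layer explicit scheme, so no matrix is ever assembled or stored.
    """
    x = np.asarray(x0, dtype=float).copy()
    r = b - apply_A(x)                 # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p                 # update the solution along the search direction
        r -= alpha * Ap                # update the residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # new conjugate search direction
        rs = rs_new
    return x
```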

  6. Pletnev N.V.
    Fast gradient first-order methods adaptive to the strong convexity and Lipschitz constants
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 947-963

    The work is devoted to the construction of first-order convex optimization methods that are efficient and applicable to real problems, i.e. methods that use only values of the objective function and its derivatives. The construction uses OGM-G, a fast gradient method that is optimal in terms of complexity but requires knowledge of the Lipschitz constant of the gradient and of the strong convexity constant in order to determine the number of steps and the step length. This requirement makes practical use very hard. The algorithm ACGM, adaptive to the strong convexity constant, is proposed; it is based on restarts of OGM-G with an update of the strong convexity constant estimate. Also proposed is ALGM, adaptive to the Lipschitz constant of the gradient, in which the restarts of OGM-G are supplemented by a selection of the Lipschitz constant with verification of the smoothness conditions used in the universal gradient descent method. This eliminates the disadvantages of the original method associated with the need to know these constants and makes practical use possible. The optimality of the complexity estimates of the constructed algorithms is proved. To verify the obtained results, experiments on model functions and real machine learning problems are carried out.
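    As a rough illustration of the restart idea (a generic adaptive-restart pattern in Python; the budgets, the halving rule and the interfaces are illustrative assumptions and do not reproduce ACGM or ALGM, which additionally verify smoothness conditions):

```python
import numpy as np

def adaptive_restart_fgm(grad, x0, L, mu_init, eps, fgm_run, max_restarts=100):
    """Generic restart scheme adaptive to an unknown strong convexity constant.

    fgm_run(grad, x, L, n_steps) is assumed to be a fixed-budget fast gradient
    routine (e.g. OGM-G) started from x; the scheme shrinks its estimate of the
    strong convexity constant whenever a restart fails to make enough progress.
    """
    x = np.asarray(x0, dtype=float).copy()
    mu = mu_init
    for _ in range(max_restarts):
        g_norm = np.linalg.norm(grad(x))
        if g_norm <= eps:
            break
        n_steps = int(np.ceil(np.sqrt(L / mu)))   # budget implied by the current estimate of mu
        x_new = fgm_run(grad, x, L, n_steps)
        if np.linalg.norm(grad(x_new)) > 0.5 * g_norm:
            mu /= 2.0                             # estimate was too optimistic; relax it
        if np.linalg.norm(grad(x_new)) < g_norm:
            x = x_new                             # keep the restart point only if it helped
    return x
```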

  7. Ostroukhov P.A.
    Tensor methods inside mixed oracle for min-min problems
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 377-398

    In this article we consider min-min problems, i.e. minimization over two groups of variables. In a certain sense this is similar to the classical min-max saddle point problem, although saddle point problems are usually more difficult. Min-min problems may occur when the groups of variables in a convex optimization problem have different dimensions or different domains. Such a problem structure allows us to split the main task into subproblems and to tackle it with mixed oracles. However, existing articles on this topic cover only zeroth- and first-order oracles; in our work we consider high-order tensor methods to solve the inner problem and a fast gradient method to solve the outer problem.

    We assume that the outer problem is constrained to a convex compact set, while for the inner problem we consider both the unconstrained case and the case of a convex compact constraint set. By definition, tensor methods use high-order derivatives, so the time per iteration strongly depends on the dimensionality of the problem being solved. Therefore, we assume that the dimension of the inner variable is not greater than 1000. Additionally, we need some specific assumptions to be able to use mixed oracles. First, we assume that the objective is convex in both groups of variables and that its gradient with respect to both variables is Lipschitz continuous. Second, we assume that the inner problem is strongly convex and its gradient is Lipschitz continuous. Also, since we are going to use tensor methods for the inner problem, we need its $p$-th order derivative to be Lipschitz continuous ($p > 1$). Finally, we assume strong convexity of the outer problem in order to be able to use the fast gradient method for strongly convex functions.

    We emphasize that we use the superfast tensor method to tackle the inner subproblem in the unconstrained case, and when we solve the inner problem on a compact set, we use the accelerated high-order composite proximal method.

    Additionally, at the end of the article we compare the theoretical complexity of the obtained methods with that of the regular gradient method, which solves the problem as an ordinary convex optimization problem without taking its structure into account (Remarks 1 and 2).
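    Structurally, the mixed oracle can be sketched as follows (a Python illustration with hypothetical interfaces, not the algorithms of the paper): the inner solver, which in the paper is a tensor-type method, produces an approximate minimizer of the inner problem, and its partial gradient with respect to the outer variable serves as an inexact gradient of the reduced outer objective $g(y) = \min_x f(x, y)$ for the fast gradient method.

```python
def make_inexact_outer_gradient(f_grad_y, inner_solver, inner_tol):
    """Builds an inexact gradient oracle for g(y) = min_x f(x, y).

    Hypothetical interfaces: inner_solver(y, tol) returns an approximate inner
    minimizer x_hat of f(., y), and f_grad_y(x, y) returns the partial gradient
    of f with respect to y.
    """
    def grad_g(y):
        x_hat = inner_solver(y, inner_tol)   # inner problem solved to accuracy inner_tol
        return f_grad_y(x_hat, y)            # Demyanov-Danskin-type inexact gradient of g
    return grad_g
```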

  8. Pletnev N.V., Dvurechensky P.E., Gasnikov A.V.
    Application of gradient optimization methods to solve the Cauchy problem for the Helmholtz equation
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 417-444

    The article is devoted to studying the application of convex optimization methods to the solution of the Cauchy problem for the Helmholtz equation, which is ill-posed since the equation is of elliptic type. The Cauchy problem is formulated as an inverse problem and is reduced to a convex optimization problem in a Hilbert space. The functional to be optimized and its gradient are calculated using the solutions of boundary value problems, which, in turn, are well-posed and can be approximately solved by standard numerical methods, such as finite-difference schemes and Fourier series expansions. The convergence of the applied fast gradient method and the quality of the solution obtained in this way are investigated experimentally. The experiments show that the accelerated gradient method, the Similar Triangles Method, converges faster than the non-accelerated one. Theorems on the computational complexity of the resulting algorithms are formulated and proved. It is found that Fourier series expansions are better than finite-difference schemes in terms of computation speed and improve the quality of the obtained solution. An attempt was made to use restarts of the Similar Triangles Method after halving the residual of the functional. In this case, the convergence does not improve, which confirms the absence of strong convexity. The experiments show that the inaccuracy of the calculations is more adequately described by the concept of additive noise in the first-order oracle. This factor limits the achievable quality of the solution, but the error does not accumulate. According to the results obtained, the use of accelerated gradient optimization methods can be a way to solve inverse problems effectively.
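    For reference, the basic Euclidean, unconstrained form of the Similar Triangles Method can be sketched as follows (a Python illustration assuming an $L$-smooth convex objective and an exact gradient; the constraints and oracle inexactness handled in the paper are omitted).

```python
import numpy as np

def similar_triangles_method(grad, x0, L, n_iter):
    """Basic Similar Triangles Method for an unconstrained L-smooth convex problem.

    grad(x) returns the gradient of the objective, L is its Lipschitz constant.
    """
    x = np.asarray(x0, dtype=float).copy()
    u = x.copy()
    A = 0.0
    for _ in range(n_iter):
        alpha = 1.0 / (2 * L) + np.sqrt(1.0 / (4 * L**2) + A / L)  # solves L*alpha^2 = A + alpha
        A_new = A + alpha
        y = (alpha * u + A * x) / A_new      # extrapolation point
        u = u - alpha * grad(y)              # "mirror" step (Euclidean, unconstrained)
        x = (alpha * u + A * x) / A_new      # convex combination: the similar triangle
        A = A_new
    return x
```

    With this choice of $\alpha$, one has $A_N = \Theta(N^2 / L)$ and the standard guarantee $f(x_N) - f_* \le \|x_0 - x_*\|^2 / (2 A_N)$, i.e. the accelerated $O(1/N^2)$ rate referred to above.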
