Search results for 'nonconvex optimization':
Articles found: 5
  1. Alkousa M.S., Gasnikov A.V., Dvurechensky P.E., Sadiev A.A., Razouk L.Ya.
    An approach for the nonconvex uniformly concave structured saddle point problem
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 225-237

    Recently, saddle point problems have received much attention due to their powerful modeling capability for many problems from diverse domains. Applications of these problems occur in many applied areas, such as robust optimization, distributed optimization, and game theory, as well as in machine learning tasks such as empirical risk minimization and the training of generative adversarial networks. Therefore, many researchers have actively worked on developing numerical methods for solving saddle point problems in many different settings. This paper is devoted to developing a numerical method for solving saddle point problems in the nonconvex uniformly-concave setting. We study a general class of saddle point problems with composite structure and Hölder-continuous higher-order derivatives. To solve the problem under consideration, we propose an approach in which we reduce the problem to a combination of two auxiliary optimization problems, one for each group of variables: the outer minimization problem w.r.t. the primal variables and the inner maximization problem w.r.t. the dual variables. For solving the outer minimization problem, we use the Adaptive Gradient Method, which is applicable to nonconvex problems and also works with an inexact oracle generated by approximately solving the inner problem. For solving the inner maximization problem, we use the Restarted Unified Acceleration Framework, which unifies high-order acceleration methods for minimizing a convex function with Hölder-continuous higher-order derivatives. Separate complexity bounds are provided for the number of calls to the first-order oracles for the outer minimization problem and to the higher-order oracles for the inner maximization problem. Finally, the complexity of the whole proposed approach is estimated.
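
    A minimal sketch of the outer/inner decomposition described above, in Python. It only illustrates the structure, not the authors' method: the inner accelerated high-order framework is replaced by plain gradient ascent, the Adaptive Gradient Method is replaced by fixed-step gradient descent, and the callables grad_x and grad_y are hypothetical user-supplied oracles.

    import numpy as np

    def approx_inner_max(x, y0, grad_y, steps=100, lr=1e-2):
        # Approximate inner maximization over the dual variables y.
        # Plain gradient ascent stands in for the accelerated framework.
        y = y0.copy()
        for _ in range(steps):
            y = y + lr * grad_y(x, y)
        return y

    def outer_minimize(x0, y0, grad_x, grad_y, outer_steps=200, lr=1e-2):
        # Outer minimization over the primal variables x, driven by the
        # inexact primal gradient grad_x(x, y~), where y~ comes from the
        # approximate inner solve (the oracle error is thus controlled by
        # how accurately the inner problem is solved).
        x, y = x0.copy(), y0.copy()
        for _ in range(outer_steps):
            y = approx_inner_max(x, y, grad_y)
            x = x - lr * grad_x(x, y)
        return x, y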

  2. Bazarova A.I., Beznosikov A.N., Gasnikov A.V.
    Linearly convergent gradient-free methods for minimization of parabolic approximation
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 239-255

    Finding the global minimum of a nonconvex function is one of the key and most difficult problems of modern optimization. In this paper, we consider special classes of nonconvex problems that have a clear and distinct global minimum.

    In the first part of the paper we consider two classes of «good» nonconvex functions, which can be bounded below and above by parabolic functions. This class of problems has not been widely studied in the literature, although it is rather interesting from an applied point of view. Moreover, for such problems, first-order and higher-order methods may be completely ineffective in finding a global minimum. This is due to the fact that the function may oscillate heavily or may be very noisy. Therefore, our new methods use only zero-order information and are based on grid search. The size and fineness of this grid, and hence the guaranteed convergence speed and oracle complexity, depend on the «goodness» of the problem. In particular, we show that if the function is bounded by fairly close parabolic functions, then the complexity is independent of the dimension of the problem. We show that our new methods converge to a global minimum on the cube at a linear rate, i.e., in $O(\log(1/\varepsilon))$ iterations.
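
    As a rough illustration of a zero-order grid search with a geometrically shrinking search box (only a sketch under the assumption that the grid is repeatedly refined around the current best point; the paper's actual grid construction, fineness, and dimension-independent guarantees are not reproduced):

    import numpy as np
    from itertools import product

    def grid_refine_min(f, lower, upper, points_per_dim=5, rounds=30):
        # Evaluate f on a uniform grid over the current box, re-centre the
        # box at the best grid point, and halve its side.  The box side
        # shrinks geometrically, so reaching accuracy eps in the argument
        # takes on the order of log(1/eps) refinement rounds.  Note that a
        # full product grid is exponential in the dimension, unlike the
        # dimension-independent bounds claimed in the paper.
        lower = np.asarray(lower, dtype=float)
        upper = np.asarray(upper, dtype=float)
        best = (lower + upper) / 2.0
        for _ in range(rounds):
            axes = [np.linspace(lo, hi, points_per_dim)
                    for lo, hi in zip(lower, upper)]
            best = np.array(min(product(*axes), key=lambda p: f(np.array(p))))
            half = (upper - lower) / 4.0     # new box has half the old side
            lower, upper = best - half, best + half
        return best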

    In the second part of the paper, we consider the nonconvex optimization problem from a different angle. We assume that the objective function is the sum of a convex quadratic function and a nonconvex «noise» function proportional to the distance to the global solution. Studying zero-order methods under such noise assumptions is new in the literature. For such a problem, we use the classical gradient-free approach with gradient approximation through finite differences. We show how the convergence analysis for our problems can be reduced to the standard analysis for convex optimization problems. In particular, we achieve a linear convergence rate for such problems as well.
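
    A minimal sketch of the classical gradient-free approach mentioned above: the gradient is approximated by central finite differences and plugged into fixed-step descent. The function names and step sizes are illustrative and not taken from the paper.

    import numpy as np

    def fd_gradient(f, x, h=1e-6):
        # Central finite-difference approximation of the gradient of f at x,
        # using 2*dim function evaluations (zero-order information only).
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
        return g

    def zero_order_descent(f, x0, lr=0.1, iters=1000):
        # Gradient descent driven only by function values via fd_gradient.
        x = x0.copy()
        for _ in range(iters):
            x = x - lr * fd_gradient(f, x)
        return x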

    Experimental results confirm the efficiency and practical applicability of all the obtained methods.

  3. Yudin N.E.
    Modified Gauss–Newton method for solving a smooth system of nonlinear equations
    Computer Research and Modeling, 2021, v. 13, no. 4, pp. 697-723

    In this paper, we introduce a new version of the Gauss–Newton method for solving a system of nonlinear equations, based on the ideas of an upper bound on the residual of the system and a quadratic regularization term. The introduced Gauss–Newton method in practice gives rise to a whole parameterized family of methods for solving systems of nonlinear equations and regression problems. The developed family of Gauss–Newton methods consists entirely of iterative methods, with generalizations to non-Euclidean normed spaces, including special forms of Levenberg–Marquardt algorithms. The developed methods use a local model based on a parameterized proximal mapping, allowing us to use an inexact «black-box» oracle with restrictions on computational precision and computational complexity. We perform an efficiency analysis, including global and local convergence, for the developed family of methods with an arbitrary oracle, in terms of iteration complexity, the precision and complexity of both the local model and the oracle, and the problem dimensionality. We present global sublinear convergence rates for methods of the proposed family for solving a system of nonlinear equations consisting of Lipschitz-smooth functions. We prove local superlinear convergence under additional natural non-degeneracy assumptions on the system of nonlinear functions. We prove both local and global linear convergence for a system of nonlinear equations under the Polyak–Łojasiewicz condition for the proposed Gauss–Newton methods. Besides the theoretical justification of the methods, we also consider practical implementation issues. In particular, for the conducted experiments we present effective computational schemes for the exact oracle with respect to the dimensionality of the problem. The proposed family of methods unites several existing Gauss–Newton modifications that are frequent in practice, allowing us to construct a flexible and convenient method implementable using standard convex optimization and computational linear algebra techniques.
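
    For orientation, a textbook Levenberg–Marquardt-style step (Gauss–Newton with a quadratic regularization term) is sketched below; it is only the classical scheme this family generalizes, not the paper's parameterized proximal construction, and the fixed regularization parameter reg is an illustrative choice.

    import numpy as np

    def regularized_gauss_newton(F, J, x0, reg=1e-3, iters=50):
        # Classical damped Gauss-Newton / Levenberg-Marquardt iteration for
        # F(x) = 0: at each step solve the regularized normal equations
        #   (J^T J + reg * I) d = -J^T F(x)
        # and update x <- x + d.  F returns the residual vector, J its Jacobian.
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(iters):
            r = F(x)
            Jx = J(x)
            A = Jx.T @ Jx + reg * np.eye(x.size)
            d = np.linalg.solve(A, -Jx.T @ r)
            x = x + d
        return x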

  4. Danilova M.Y., Malinovskiy G.S.
    Averaged heavy-ball method
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 277-308

    First-order optimization methods are workhorses in a wide range of modern applications in economics, physics, biology, machine learning, control, and other fields. Among first-order methods, accelerated and momentum methods receive special attention because of their practical efficiency. The heavy-ball method (HB) is one of the first momentum methods. The method was proposed in 1964, and its first analysis was conducted for quadratic strongly convex functions. Since then, a number of variations of HB have been proposed and analyzed. In particular, HB is known for its simplicity of implementation and its performance on nonconvex problems. However, like other momentum methods, it has nonmonotone behavior, and for optimal parameters the method suffers from the so-called peak effect. To address this issue, in this paper we consider an averaged version of the heavy-ball method (AHB). We show that for quadratic problems AHB has a smaller maximal deviation from the solution than HB. Moreover, for general convex and strongly convex functions, we prove non-accelerated rates of global convergence of AHB, its weighted version WAHB, and AHB with restarts, R-AHB. To the best of our knowledge, such guarantees for HB with averaging were not explicitly proven for strongly convex problems in existing works. Finally, we conduct several numerical experiments on minimizing quadratic and nonquadratic functions to demonstrate the advantages of using averaging for HB. We also test one more modification of AHB, called the tail-averaged heavy-ball method (TAHB). In the experiments, we observe that HB with a properly adjusted averaging scheme converges faster than HB without averaging and has smaller oscillations.
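
    A minimal sketch of the averaging idea: standard heavy-ball iterates together with a running uniform average of them. The paper's specific weighting (WAHB), restart (R-AHB), and tail-averaging (TAHB) schemes are not reproduced; grad, the step size, and the momentum parameter are illustrative assumptions.

    import numpy as np

    def averaged_heavy_ball(grad, x0, lr=0.01, momentum=0.9, iters=1000):
        # Heavy-ball update x_{k+1} = x_k - lr*grad(x_k) + momentum*(x_k - x_{k-1}),
        # plus a running uniform average of the iterates; the average damps the
        # nonmonotone oscillations of the raw heavy-ball trajectory.
        x_prev = np.asarray(x0, dtype=float).copy()
        x = x_prev.copy()
        avg = x_prev.copy()
        for k in range(1, iters + 1):
            x_next = x - lr * grad(x) + momentum * (x - x_prev)
            x_prev, x = x, x_next
            avg = avg + (x - avg) / (k + 1)   # running average of x_0, ..., x_k
        return x, avg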

  5. Dvurechensky P.E.
    A gradient method with inexact oracle for composite nonconvex optimization
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 321-334

    In this paper, we develop a new first-order method for composite nonconvex minimization problems with simple constraints and an inexact oracle. The objective function is given as a sum of a «hard», possibly nonconvex, part and a «simple» convex part. Informally speaking, oracle inexactness means that, for the «hard» part, at any point we can approximately calculate the value of the function and construct a quadratic function which approximately bounds this function from above. We give several examples of such inexactness: smooth nonconvex functions with an inexact Hölder-continuous gradient, and functions given by an auxiliary uniformly concave maximization problem that can be solved only approximately. For the introduced class of problems, we propose a gradient-type method which allows one to use different proximal setups to adapt to the geometry of the feasible set, adaptively chooses the controlled oracle error, and allows for an inexact proximal mapping. We provide a convergence rate for our method in terms of the norm of the generalized gradient mapping and show that, in the case of an inexact Hölder-continuous gradient, our method is universal with respect to the Hölder parameters of the problem. Finally, in a particular case, we show that a small value of the norm of the generalized gradient mapping at a point means that a necessary condition of a local minimum approximately holds at that point.
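
    For reference, an exact-oracle proximal-gradient sketch showing how the generalized gradient mapping serves as a stationarity measure; the inexact oracle, the adaptive choice of the error and step, and the non-Euclidean proximal setups from the paper are omitted, and grad_f, prox_h, and the constant L are illustrative assumptions.

    import numpy as np

    def prox_gradient(grad_f, prox_h, x0, L=1.0, iters=500, tol=1e-6):
        # Proximal-gradient step for min f(x) + h(x):
        #   x+ = prox_{h/L}(x - grad_f(x)/L).
        # The generalized gradient mapping L*(x - x+) plays the role of the
        # gradient; a small norm of this mapping is the stopping criterion.
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(iters):
            x_plus = prox_h(x - grad_f(x) / L, 1.0 / L)
            g_map = L * (x - x_plus)        # generalized gradient mapping
            x = x_plus
            if np.linalg.norm(g_map) < tol:
                break
        return x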
