Search results for 'iterative method':
Articles found: 58
  1. Ostroukhov P.A.
    Tensor methods inside mixed oracle for min-min problems
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 377-398

    In this article we consider min-min problems, that is, minimization over two groups of variables. Such problems are in some ways similar to the classical min-max saddle-point problem, although saddle-point problems are usually harder. Min-min problems may arise when the groups of variables in a convex optimization problem have different dimensions or different domains. This structure makes it possible to split the main task into subproblems and to tackle it with mixed oracles. However, the existing articles on this topic cover only zeroth- and first-order oracles; in our work we use high-order tensor methods to solve the inner problem and the fast gradient method to solve the outer problem.

    We assume that the outer problem is constrained to a convex compact set, while for the inner problem we consider both the unconstrained case and the case of a convex compact constraint set. By definition, tensor methods use high-order derivatives, so the time per iteration depends heavily on the dimensionality of the problem being solved. We therefore assume that the dimension of the inner variable does not exceed 1000. Additionally, we need some specific assumptions to be able to use mixed oracles. First, we assume that the objective is convex in both groups of variables and that its gradient with respect to both variables is Lipschitz continuous. Second, we assume that the inner problem is strongly convex and that its gradient is Lipschitz continuous. Also, since we use tensor methods for the inner problem, its p-th order derivative must be Lipschitz continuous (p > 1). Finally, we assume strong convexity of the outer problem, so that the fast gradient method for strongly convex functions can be applied.

    We emphasize that we use the superfast tensor method to tackle the inner subproblem in the unconstrained case, and when we solve the inner problem on a compact set, we use the accelerated high-order composite proximal method.

    Additionally, at the end of the article we compare the theoretical complexity of the obtained methods with that of the regular gradient method, which solves the problem as an ordinary convex optimization problem and does not take its structure into account (Remarks 1 and 2).
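
    A minimal sketch of the mixed-oracle structure described above, assuming the standard fact that for a strongly convex inner problem the gradient of g(y) = min_x f(x, y) equals the partial gradient of f at the inner minimizer. The inner and outer solvers here are plain (projected) gradient steps used as placeholders for the superfast tensor method and the fast gradient method of the paper; all names are illustrative.

    import numpy as np

    def minmin_mixed_oracle(grad_x, grad_y, x0, y0, project_y,
                            inner_steps=50, outer_steps=100,
                            lr_inner=1e-2, lr_outer=1e-2):
        """Sketch of a mixed-oracle min-min scheme: the outer variable y is
        updated by projected gradient steps on g(y) = min_x f(x, y), and the
        inner problem is solved approximately by a subroutine (here plain
        gradient descent instead of a tensor method)."""
        x, y = x0.copy(), y0.copy()
        for _ in range(outer_steps):
            # inner problem: approximately minimize f(., y) over x
            for _ in range(inner_steps):
                x -= lr_inner * grad_x(x, y)
            # outer step: for a strongly convex inner problem,
            # grad g(y) = grad_y f(x*(y), y)
            y = project_y(y - lr_outer * grad_y(x, y))
        return x, y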

  2. Stonyakin F.S., Ablaev S.S., Baran I.V., Alkousa M.S.
    Subgradient methods for weakly convex and relatively weakly convex problems with a sharp minimum
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 393-412

    The work is devoted to the study of subgradient methods with different variations of the Polyak stepsize for minimizing functions from the class of weakly convex and relatively weakly convex functions that have the corresponding analogue of a sharp minimum. It turns out that, under certain assumptions about the starting point, such an approach makes it possible to justify convergence of the subgradient method at the rate of a geometric progression. For the subgradient method with the Polyak stepsize, a refined estimate of the convergence rate is proved for minimization problems of weakly convex functions with a sharp minimum. A feature of this estimate is that it additionally takes into account the decrease of the distance from the current point of the method to the solution set as the number of iterations grows. The results of numerical experiments for the phase reconstruction problem (which is weakly convex and has a sharp minimum) are presented, demonstrating the effectiveness of the proposed approach to estimating the convergence rate compared to the known one. Next, we propose a variation of the subgradient method with switching between productive and non-productive steps for weakly convex problems with inequality constraints and obtain the corresponding analogue of the result on convergence at the rate of a geometric progression. For the subgradient method with the corresponding variation of the Polyak stepsize on the class of relatively Lipschitz and relatively weakly convex functions with a relative analogue of a sharp minimum, conditions were obtained that guarantee convergence of such a subgradient method at the rate of a geometric progression. Finally, a theoretical result is obtained that describes the influence of errors in the information about the (sub)gradient and the objective function available to the subgradient method on the quality estimate of the obtained approximate solution. It is proved that for a sufficiently small error δ > 0, one can guarantee that the accuracy of the solution is comparable to δ.
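
    A minimal sketch of the baseline subgradient method with the classical Polyak step-size that this abstract builds on (not the refined variants studied in the paper); the objective, its subgradient and the minimal value f* below are placeholders, and the toy example at the end uses the l1-norm, which has a sharp minimum at zero.

    import numpy as np

    def polyak_subgradient(f, subgrad, f_star, x0, max_iter=1000, tol=1e-8):
        """Subgradient method with the classical Polyak step-size
        h_k = (f(x_k) - f*) / ||g_k||^2; under a sharp-minimum condition
        this step yields convergence at the rate of a geometric progression."""
        x = x0.copy()
        for _ in range(max_iter):
            gap = f(x) - f_star
            if gap <= tol:
                break
            g = subgrad(x)
            x = x - gap / np.dot(g, g) * g
        return x

    # toy usage: f(x) = ||x||_1 has a sharp minimum at 0 with f* = 0
    f = lambda x: np.abs(x).sum()
    subgrad = lambda x: np.sign(x)
    x_hat = polyak_subgradient(f, subgrad, f_star=0.0, x0=np.array([3.0, -2.0]))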

  3. Stonyakin F.S., Savchuk O.S., Baran I.V., Alkousa M.S., Titov A.A.
    Analogues of the relative strong convexity condition for relatively smooth problems and adaptive gradient-type methods
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 413-432

    This paper is devoted to some variants of improving the convergence rate guarantees of gradient-type algorithms for relatively smooth and relatively Lipschitz-continuous problems in the case of additional information about some analogues of the strong convexity of the objective function. We consider two classes of problems, namely, convex problems with a relative functional growth condition, and problems (generally, non-convex) with an analogue of the Polyak – Lojasiewicz gradient dominance condition with respect to the Bregman divergence. For the first type of problems, we propose two restart schemes for gradient-type methods and justify theoretical convergence estimates for two algorithms with adaptively chosen parameters corresponding to the relative smoothness or Lipschitz property of the objective function. The first of these algorithms has a simpler iteration stopping criterion, but its near-optimal computational guarantees are justified only on the class of relatively Lipschitz-continuous problems. The restart procedure of the other algorithm, in turn, allowed us to obtain more universal theoretical results. We proved a near-optimal complexity estimate on the class of convex relatively Lipschitz-continuous problems with a functional growth condition, and we also obtained linear convergence rate guarantees on the class of relatively smooth problems with a functional growth condition. For the class of problems with an analogue of the gradient dominance condition with respect to the Bregman divergence, estimates of the quality of the output solution were obtained using adaptively selected parameters. At the conclusion of the paper, we also present the results of computational experiments illustrating the performance of the methods of the second approach. As examples, we considered a linear inverse Poisson problem (minimizing the Kullback – Leibler divergence), its regularized version, which guarantees relative strong convexity of the objective function, as well as an example of a relatively smooth and relatively strongly convex problem. In particular, the calculations show that a relatively strongly convex function may not satisfy the relative variant of the gradient dominance condition.
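
    A minimal sketch of a generic restart scheme of the kind referred to above, assuming a functional growth condition so that the distance to the solution set contracts after each restart. The base method, its contraction contract and the constants are illustrative placeholders, not the paper's adaptive algorithms.

    def restarted_method(base_method, x0, R0, mu, eps, n_restarts=20):
        """Generic restart wrapper: `base_method(x, R)` is assumed to return a
        point whose objective gap is at most mu * R / 2 when started within
        distance R of a solution; under a functional growth condition
        f(x) - f* >= mu * dist(x, X*), this halves the distance estimate, so R
        is divided by two after every restart."""
        x, R = x0, R0
        for _ in range(n_restarts):
            x = base_method(x, R)
            R /= 2.0                  # distance-to-solution estimate contracts
            if mu * R <= eps:         # target accuracy reached
                break
        return x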

  4. Savchuk O.S., Alkousa M.S., Stonyakin F.S.
    On some mirror descent methods for strongly convex programming problems with Lipschitz functional constraints
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1727-1746

    The paper is devoted to one approach to constructing subgradient methods for strongly convex programming problems with several functional constraints. More precisely, a strongly convex minimization problem with several strongly convex (inequality-type) constraints is considered, and first-order optimization methods for this class of problems are proposed. The special feature of the proposed methods is the possibility of using the strong convexity parameters of the violated functional constraints at nonproductive iterations in theoretical estimates of the quality of the solution produced by the methods. The main task in solving the considered problem is to propose a subgradient method with adaptive step-selection rules and a stopping rule. The key idea of the proposed methods is to combine two approaches: a scheme with switching between productive and nonproductive steps, and recently proposed modifications of mirror descent for convex programming problems that allow ignoring some of the functional constraints at nonproductive steps of the algorithms. The paper describes a subgradient method with switching between productive and nonproductive steps for strongly convex programming problems in the case where the objective function and functional constraints satisfy the Lipschitz condition. An analogue of the proposed subgradient method, a mirror descent scheme for problems with relatively Lipschitz and relatively strongly convex objective functions and constraints, is also considered. For the proposed methods, theoretical estimates of the quality of the solution were obtained; they indicate the optimality of these methods from the point of view of lower oracle bounds. In addition, since in many problems the operation of finding an exact subgradient vector is quite expensive, for the class of problems under consideration we also investigated analogues of the above methods in which the usual subgradient of the objective function or of the functional constraints is replaced by a δ-subgradient. This approach can save computational costs by not requiring the exact value of the subgradient at the current point. It is shown that the quality estimates of the solution change by O(δ). The results of numerical experiments illustrating the advantages of the proposed methods in comparison with some previously known ones are also presented.
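
    A textbook sketch of the switching scheme with productive and nonproductive steps mentioned above, with a normalized constant step-size as a placeholder for the adaptive step and stopping rules proposed in the paper; all names are illustrative.

    import numpy as np

    def switching_subgradient(f, grad_f, constraints, grad_cons, x0,
                              eps, n_iter=1000, step=1e-2):
        """If every constraint satisfies g_i(x) <= eps, take a step along a
        subgradient of the objective (productive step); otherwise step along a
        subgradient of a violated constraint (nonproductive step). The best
        productive point is returned."""
        x = x0.copy()
        productive_points = []
        for _ in range(n_iter):
            violated = [i for i, g in enumerate(constraints) if g(x) > eps]
            if not violated:                      # productive step
                productive_points.append(x.copy())
                d = grad_f(x)
            else:                                 # nonproductive step
                d = grad_cons[violated[0]](x)
            x = x - step * d / max(np.linalg.norm(d), 1e-12)
        return min(productive_points, key=f) if productive_points else x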

  5. Ablaev S.S., Makarenko D.V., Stonyakin F.S., Alkousa M.S., Baran I.V.
    Subgradient methods for non-smooth optimization problems with some relaxation of sharp minimum
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 473-495

    Non-smooth optimization often arises in many applied problems. The issues of developing efficient computational procedures for such problems in high-dimensional spaces are very topical. First-order methods (subgradient methods) are well applicable here, but in fairly general situations they lead to low convergence rate guarantees for large-scale problems. One approach to this type of problem is to identify a subclass of non-smooth problems that allow relatively optimistic results on the rate of convergence. For example, one option for an additional assumption is the sharp minimum condition proposed in the late 1960s by B. T. Polyak. If the minimal value of the function is available, then for Lipschitz-continuous problems with a sharp minimum it turns out to be possible to propose a subgradient method with the Polyak step-size, which guarantees a linear rate of convergence in the argument. This approach made it possible to cover a number of important applied problems (for example, the problem of projecting onto a convex compact set). However, both the availability of the minimal value of the function and the sharp minimum condition itself look rather restrictive. In this regard, we propose a generalized sharp minimum condition, somewhat similar to the inexact oracle proposed recently by Devolder – Glineur – Nesterov. The proposed approach makes it possible to extend the applicability of subgradient methods with the Polyak step-size to the situation of inexact information about the value of the minimum, as well as an unknown Lipschitz constant of the objective function. Moreover, the use of local analogues of the global characteristics of the objective function makes it possible to apply results of this type to wider classes of problems. We show the possibility of applying the proposed approach to strongly convex non-smooth problems and also make an experimental comparison with the known optimal subgradient method for this class of problems. Moreover, some results were obtained concerning the applicability of the proposed technique to certain types of problems with convexity relaxations: the recently proposed notion of weak β-quasi-convexity and ordinary quasi-convexity. We also study a generalization of the described technique to the situation in which a δ-subgradient of the objective function is available instead of the usual subgradient. For one of the considered methods, conditions are found under which, in practice, it is possible to avoid projecting the iterative sequence onto the feasible set of the problem.
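
    For reference, the classical sharp minimum condition and the Polyak step-size discussed in this and the neighboring abstracts can be written as follows (these are the standard definitions, not the generalized conditions introduced in the paper):

    f(x) - f^* \ge \mu \,\mathrm{dist}(x, X^*) \quad \text{for all feasible } x, \qquad \mu > 0,
    x_{k+1} = x_k - h_k g_k, \qquad h_k = \frac{f(x_k) - f^*}{\|g_k\|^2}, \qquad g_k \in \partial f(x_k).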

  6. Irkhin I.A., Bulatov V.G., Vorontsov K.V.
    Additive regularization of topic models with fast text vectorization
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1515-1528

    The probabilistic topic model of a text document collection finds two matrices: a matrix of conditional probabilities of topics in documents and a matrix of conditional probabilities of words in topics. Each document is represented by a multiset of words also called the “bag of words”, thus assuming that the order of words is not important for revealing the latent topics of the document. Under this assumption, the problem is reduced to a low-rank non-negative matrix factorization governed by likelihood maximization. In general, this problem is ill-posed having an infinite set of solutions. In order to regularize the solution, a weighted sum of optimization criteria is added to the log-likelihood. When modeling large text collections, storing the first matrix seems to be impractical, since its size is proportional to the number of documents in the collection. At the same time, the topical vector representation (embedding) of documents is necessary for solving many text analysis tasks, such as information retrieval, clustering, classification, and summarization of texts. In practice, the topical embedding is calculated for a document “on-the-fly”, which may require dozens of iterations over all the words of the document. In this paper, we propose a way to calculate a topical embedding quickly, by one pass over document words. For this, an additional constraint is introduced into the model in the form of an equation, which calculates the first matrix from the second one in linear time. Although formally this constraint is not an optimization criterion, in fact it plays the role of a regularizer and can be used in combination with other regularizers within the additive regularization framework ARTM. Experiments on three text collections have shown that the proposed method improves the model in terms of sparseness, difference, logLift and coherence measures of topic quality. The open source libraries BigARTM and TopicNet were used for the experiments.
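
    A hypothetical sketch of a one-pass topical embedding of the kind described above: for each word of the document, a topic distribution derived from the word-topic matrix is accumulated and then normalized, so the document embedding is computed from the second matrix in linear time. The specific accumulation rule below is an illustrative assumption, not necessarily the exact equation introduced in the paper.

    import numpy as np

    def one_pass_embedding(doc_word_counts, phi, topic_prior=None):
        """doc_word_counts: dict {word_index: count};
        phi: array of shape (n_words, n_topics) with columns p(w | t).
        Returns a normalized topic distribution (embedding) for the document,
        computed in a single pass over its words."""
        n_topics = phi.shape[1]
        prior = np.full(n_topics, 1.0 / n_topics) if topic_prior is None else topic_prior
        theta = np.zeros(n_topics)
        for w, n_dw in doc_word_counts.items():      # one pass over document words
            p_tw = phi[w] * prior                    # unnormalized p(t | w)
            s = p_tw.sum()
            if s > 0:
                theta += n_dw * p_tw / s
        total = theta.sum()
        return theta / total if total > 0 else prior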

  7. Stonyakin F.S., Lushko E.A., Tretiak I.D., Ablaev S.S.
    Subgradient methods for weakly convex problems with a sharp minimum in the case of inexact information about the function or subgradient
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1765-1778

    The problem of developing efficient numerical methods for non-convex (including non-smooth) problems is relevant due to the widespread use of such problems in applications. This paper is devoted to subgradient methods for minimizing Lipschitz μ-weakly convex functions, which are not necessarily smooth. It is well known that subgradient methods have low convergence rates in high-dimensional spaces even for convex functions. However, if we consider a subclass of functions that satisfies the sharp minimum condition and also use the Polyak step, we can guarantee a linear convergence rate of the subgradient method. In some cases, the values of the function or its subgradient may be available to the numerical method only with some error. The accuracy of the solution provided by the numerical method depends on the magnitude of this error. In this paper, we investigate the behavior of the subgradient method with a Polyak step when inexact information about the objective function value or subgradient is used in iterations. We prove that with a specific choice of starting point, the subgradient method with some analogue of the Polyak step-size converges at a geometric progression rate on the class of μ-weakly convex functions with a sharp minimum, provided that there is additive inaccuracy in the subgradient values. In the case when both the value of the function and the value of its subgradient at the current point are known with error, convergence to some neighborhood of the set of exact solutions is shown, and quality estimates of the output solution of the subgradient method with the corresponding analogue of the Polyak step are obtained. The article also proposes a subgradient method with a clipped step, and an assessment of the quality of the solution obtained by this method for the class of μ-weakly convex functions with a sharp minimum is presented. Numerical experiments were conducted for the problem of low-rank matrix recovery. They showed that the efficiency of the studied algorithms may not depend on the accuracy of localization of the initial approximation within the required region, and that inaccuracy in the values of the function and subgradient may affect the number of iterations required to achieve an acceptable quality of the solution but has almost no effect on the quality of the solution itself.
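
    A toy simulation of the inexact-oracle setting studied here, assuming the function value and subgradient are available only up to an additive error bounded by δ and the Polyak-type step is computed from this inexact information; one then expects convergence only to a δ-dependent neighborhood of the solution set. This is an illustrative experiment setup, not the paper's algorithm or error model.

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_polyak_run(f, subgrad, f_star, x0, delta, n_iter=200):
        """Run a Polyak-type subgradient step using values and subgradients
        perturbed by additive noise of magnitude at most delta; the true
        optimality gap is recorded for monitoring."""
        x = x0.copy()
        gaps = []
        for _ in range(n_iter):
            g = subgrad(x) + delta * rng.uniform(-1.0, 1.0, size=x.shape)
            fx = f(x) + delta * rng.uniform(-1.0, 1.0)
            gap = max(fx - f_star, 0.0)
            if np.dot(g, g) > 0:
                x = x - gap / np.dot(g, g) * g
            gaps.append(f(x) - f_star)   # exact gap, for monitoring only
        return x, gaps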

  8. Yudin N.E., Gasnikov A.V.
    Regularization and acceleration of Gauss – Newton method
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1829-1840

    We propose a family of Gauss – Newton methods for solving optimization problems and systems of nonlinear equations based on the ideas of using an upper estimate of the norm of the residual of the system of nonlinear equations and quadratic regularization. The paper presents a development of the «Three Squares Method» scheme with the addition of a momentum term to the update rule of the sought parameters of the problem to be solved. The resulting scheme has several remarkable properties. First, the paper algorithmically describes a whole parametric family of methods that minimize functionals of a special kind: compositions of the residual of a nonlinear equation and a unimodal functional. Such a functional, entirely consistent with the «gray box» paradigm of the problem description, covers a large number of solvable problems arising in machine learning applications, including regression problems. Second, the obtained family of methods is described as a generalization of several forms of the Levenberg – Marquardt algorithm, allowing implementation in non-Euclidean spaces as well. The algorithm describing the parametric family of Gauss – Newton methods uses an iterative procedure that performs an inexact parametrized proximal mapping and a shift using a momentum term. The paper contains a detailed analysis of the efficiency of the proposed family of Gauss – Newton methods; the derived estimates take into account the number of external iterations of the algorithm for solving the main problem, and the accuracy and computational complexity of the local model representation and oracle computation. Sublinear and linear convergence conditions based on the Polyak – Lojasiewicz inequality are derived for the family of methods. In both observed convergence regimes, the Lipschitz property of the residual of the nonlinear system of equations is assumed locally. In addition to the theoretical analysis of the scheme, the paper studies the issues of its practical implementation. In particular, for the suboptimal step used in the experiments, schemes for efficiently computing an approximation of the best step are given, which makes it possible to improve the convergence of the method in practice in comparison with the original «Three Squares Method». The proposed scheme combines several existing modifications of the Gauss – Newton method that are frequently used in practice; in addition, the paper proposes a monotone momentum modification of the family of developed methods, which does not slow down the search for a solution in the worst case and demonstrates in practice an improvement in the convergence of the method.
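
    A minimal sketch of a generic damped Gauss – Newton (Levenberg – Marquardt style) iteration with a heavy-ball momentum term added to the update rule, to illustrate the kind of scheme the abstract describes; this is not the paper's «Three Squares Method», and the regularization and momentum constants are placeholders.

    import numpy as np

    def gauss_newton_momentum(residual, jacobian, x0, reg=1e-3, beta=0.5,
                              n_iter=100, tol=1e-10):
        """Each iteration solves the quadratically regularized normal equations
        (J^T J + reg * I) d = -J^T r and then applies the update with a
        momentum (heavy-ball) term added to the step."""
        x = x0.copy()
        prev_step = np.zeros_like(x)
        for _ in range(n_iter):
            r = residual(x)
            if np.dot(r, r) < tol:
                break
            J = jacobian(x)
            d = np.linalg.solve(J.T @ J + reg * np.eye(x.size), -J.T @ r)
            step = d + beta * prev_step    # momentum term
            x = x + step
            prev_step = step
        return x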
