Search results for 'Lipschitz functional':
Articles found: 17
  1. Alkousa M.S.
    On some stochastic mirror descent methods for constrained online optimization problems
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 205-217

    The problem of online convex optimization naturally arises in cases where statistical information is updated over time. The mirror descent method is well known for non-smooth optimization problems; it extends the subgradient method to non-smooth convex problems in the case of a non-Euclidean distance. This paper is devoted to a stochastic variant of recently proposed Mirror Descent methods for convex online optimization problems with convex Lipschitz (generally, non-smooth) functional constraints. This means that we can still use the value of the functional constraint, but instead of the (sub)gradients of the objective functional and the functional constraint we use their stochastic (sub)gradients. More precisely, assume that $N$ convex Lipschitz non-smooth functionals are given on a closed subset of an $n$-dimensional vector space. The problem is to minimize the arithmetic mean of these functionals subject to a convex Lipschitz constraint. Two methods using stochastic (sub)gradients are proposed for solving this problem: an adaptive method (which requires knowledge of the Lipschitz constant neither for the objective functional nor for the functional constraint) and a non-adaptive method (which requires knowledge of the Lipschitz constants for both). Note that the stochastic (sub)gradient of each functional is allowed to be computed only once. In the case of non-negative regret, we find that the number of non-productive steps is $O(N)$, which indicates the optimality of the proposed methods. We consider an arbitrary proximal structure, which is essential for decision-making problems. The results of numerical experiments are presented, allowing a comparison of the adaptive and non-adaptive methods on several examples. It is shown that the adaptive method can significantly improve the quality of the found solutions.
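    A minimal Euclidean sketch of the switching scheme described in the abstract (productive steps on the objective, non-productive steps on the violated constraint) is given below; the step rule $\varepsilon/\|g\|^2$, the toy problem and all names are illustrative assumptions, not the authors' exact algorithm or proximal setup.

```python
import numpy as np

def adaptive_switching_md(f_sgrad, g_val, g_sgrad, x0, eps, n_iters, rng):
    """Adaptive mirror descent with switching (Euclidean prox, sketch only):
    if the constraint value is at most eps, take a productive step using a
    stochastic subgradient of the objective; otherwise take a non-productive
    step using a subgradient of the constraint. Steps eps/||g||^2 avoid any
    Lipschitz constants, in the spirit of the adaptive variant."""
    x = np.asarray(x0, dtype=float).copy()
    productive = []                          # iterates from productive steps
    for _ in range(n_iters):
        if g_val(x) <= eps:                  # productive step
            g = f_sgrad(x, rng)
            productive.append(x.copy())
        else:                                # non-productive step
            g = g_sgrad(x)
        x = x - eps / max(g @ g, 1e-12) * g  # Euclidean prox-mapping step
    return np.mean(productive, axis=0) if productive else x

# Toy usage: minimize the mean of |<a_i, x> - b_i| subject to ||x||_1 - 1 <= 0
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 5)), rng.normal(size=50)

def f_sgrad(x, r):
    i = r.integers(len(b))                   # stochastic (sub)gradient of one term
    return np.sign(A[i] @ x - b[i]) * A[i]

g_val = lambda x: np.abs(x).sum() - 1.0
g_sgrad = lambda x: np.sign(x)
x_hat = adaptive_switching_md(f_sgrad, g_val, g_sgrad, np.zeros(5), 0.05, 2000, rng)
```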

  2. Pasechnyuk D.A., Stonyakin F.S.
    One method for minimization of a convex Lipschitz-continuous function of two variables on a fixed square
    Computer Research and Modeling, 2019, v. 11, no. 3, pp. 379-395

    In the article we obtain estimates of the rate of convergence for the method recently proposed by Yu. E. Nesterov for minimizing a convex Lipschitz-continuous function of two variables on a square with a fixed side. The idea of the method is to divide the square into smaller parts and gradually remove them so that the remaining part is sufficiently small. The method consists in solving auxiliary problems of one-dimensional minimization along separating segments and does not require computing the exact value of the gradient of the objective functional. The main result of the paper is proved for the class of smooth convex functions having a Lipschitz-continuous gradient. Moreover, it is noted that the Lipschitz continuity of the gradient needs to be required not on the whole square, but only on some segments. It is shown that the method can work in the presence of errors both in solving the auxiliary one-dimensional problems and in computing the directions of the gradients. We also describe situations in which the time spent on solving the auxiliary one-dimensional problems can be neglected or reduced. For some examples, experiments have demonstrated that the method can work effectively on certain classes of non-smooth functions. At the same time, an example of a simple non-smooth function is constructed for which, if the subgradient is chosen incorrectly, the convergence of the method may fail even when the auxiliary one-dimensional problem is solved exactly. Experiments have shown that the method under consideration can achieve the desired accuracy in less time than the other methods considered (gradient descent and the ellipsoid method). In particular, it is noted that as the required accuracy increases, the running time of Yu. E. Nesterov's method can grow more slowly than that of the ellipsoid method.
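    To make the geometric idea concrete, here is a rough sketch of a halving scheme of this kind on a rectangle; the 1D solver (ternary search) and the discard rule based on the sign of one gradient component are assumptions for illustration, not Yu. E. Nesterov's exact procedure.

```python
import numpy as np

def minimize_on_square(f, grad, lo, hi, tol=1e-6, max_iter=200):
    """Halving sketch: cut the current rectangle by a segment, minimize the
    objective along that segment (auxiliary 1D problem), and use the sign of
    the gradient component at the 1D minimizer to discard the half that
    cannot contain the minimum."""
    lo, hi = np.array(lo, float), np.array(hi, float)

    def ternary(phi, a, b, iters=60):            # auxiliary 1D minimization
        for _ in range(iters):
            m1, m2 = a + (b - a) / 3, b - (b - a) / 3
            if phi(m1) < phi(m2):
                b = m2
            else:
                a = m1
        return 0.5 * (a + b)

    for _ in range(max_iter):
        if np.max(hi - lo) < tol:
            break
        i = int(np.argmax(hi - lo))              # cut across the longer side
        j = 1 - i
        mid = 0.5 * (lo[i] + hi[i])

        def on_cut(t, i=i, j=j, mid=mid):        # point on the separating segment
            p = np.empty(2)
            p[i], p[j] = mid, t
            return p

        t_star = ternary(lambda t: f(on_cut(t)), lo[j], hi[j])
        g = grad(on_cut(t_star))
        if g[i] > 0:                             # minimum lies in the lower half
            hi[i] = mid
        else:
            lo[i] = mid
    return 0.5 * (lo + hi)

# Toy usage on a smooth convex function of two variables
f = lambda p: (p[0] - 0.3) ** 2 + 2 * (p[1] + 0.1) ** 2
grad = lambda p: np.array([2 * (p[0] - 0.3), 4 * (p[1] + 0.1)])
print(minimize_on_square(f, grad, [-1, -1], [1, 1]))
```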

  3. Chernyaev A.P., Chernyaeva S.A.
    Features of numerical solutions of some problems for cnoidal waves as periodic solutions of the Korteweg – de Vries equation
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 885-901

    This article discusses the features of numerical solutions of some problems for cnoidal waves, which are periodic traveling-wave solutions of the classical Korteweg – de Vries equation. Exact solutions describing these waves were obtained by reducing the Korteweg – de Vries equation, in the traveling-wave (autowave) approximation, to ordinary differential equations of the third, second and, finally, first order. A numerical example shows that the resulting ordinary differential equations are not equivalent. The theorem formulated and proved in this article, together with the remark to it, compares the solution sets of the first- and second-order equations, which turn out not to coincide. The first-order ordinary differential equation obtained in the autowave approximation describes both a cnoidal wave (a periodic solution) and a soliton (a solitary wave). Nevertheless, from a computational point of view this equation is the most inconvenient: the Lipschitz condition for the sought function is not satisfied in the neighborhood of constant solutions, so the existence and uniqueness theorem for the Cauchy problem of a first-order ordinary differential equation does not apply. In particular, the uniqueness of the solution to the Cauchy problem is violated at stationary points. Therefore, for the first-order ordinary differential equation obtained from the Korteweg – de Vries equation, both in the case of a cnoidal wave and in the case of a soliton, the Cauchy problem cannot be posed at the extremum points; the initial condition can only be set at a point between adjacent extremum points. For the second- and third-order equations, however, the initial conditions can be set both at growth points and at extremum points. In this case, the interval available for the numerical solution expands considerably and the periodicity of the solution is observed. For these ordinary differential equations, the statements of the Cauchy problems are studied, and the results are compared with the exact solutions and with each other. A numerical realization of the transformation of a cnoidal wave into a soliton is shown. The results of the article have a hemodynamic interpretation in terms of the pulsating blood flow in a cylindrical blood vessel consisting of elastic rings.
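    The non-uniqueness at the extremum points can be seen numerically on the standard first-order traveling-wave reduction of the KdV equation; the specific form $u_t + 6uu_x + u_{xxx} = 0$ with zero background, giving $(u')^2 = u^2(c - 2u)$, is an assumption used here only for illustration.

```python
import numpy as np

# Sketch: for u_t + 6 u u_x + u_xxx = 0 with zero background, the first-order
# traveling-wave reduction is (u')^2 = u^2 (c - 2u), i.e. u' = -u*sqrt(c - 2u)
# on the descending slope. The right-hand side is not Lipschitz in u near the
# crest u = c/2, so a Cauchy problem started exactly at the extremum stays on
# the constant solution, while one started slightly below follows the soliton.
c = 1.0
rhs = lambda u: -u * np.sqrt(max(c - 2.0 * u, 0.0))

def rk4(u0, h, n):
    """Classical Runge-Kutta integration of u' = rhs(u)."""
    u, out = float(u0), [float(u0)]
    for _ in range(n):
        k1 = rhs(u)
        k2 = rhs(u + h * k1 / 2)
        k3 = rhs(u + h * k2 / 2)
        k4 = rhs(u + h * k3)
        u += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        out.append(u)
    return np.array(out)

xi = np.linspace(0.0, 10.0, 1001)
exact = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * xi) ** 2   # soliton, crest at xi = 0
at_crest = rk4(c / 2, xi[1], 1000)            # stuck on the constant solution
near_crest = rk4(c / 2 - 1e-6, xi[1], 1000)   # tracks the soliton profile
print(np.abs(at_crest - exact).max(), np.abs(near_crest - exact).max())
```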

  4. Akindinov G.D., Matyukhin V.V., Krivorotko O.I.
    Numerical solution of an inverse problem for a hyperbolic heat equation with a small parameter
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 245-258

    In this paper we describe an algorithm for the numerical solution of an inverse problem for a hyperbolic heat equation containing an additional second time derivative with a small parameter. The problem is to recover the initial distribution from a given final distribution. The algorithm allows finding a solution to the problem with any admissible prescribed precision and avoids difficulties analogous to those arising for the heat equation with reversed time. Furthermore, it finds an optimal grid size by learning on a relatively coarse grid with a small number of iterations of a gradient method and then extrapolating to the required grid size by Richardson's method. The algorithm also provides an adequate estimate of the Lipschitz constant of the gradient of the target functional. Finally, it can easily be applied to problems with a similar structure, for example, in solving equations for plasma, social processes and various biological problems. The theoretical novelty of the paper consists in developing an optimal procedure for choosing the required grid size using Richardson extrapolation for optimization problems with an inexact gradient in ill-posed problems.
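    The Richardson step itself is elementary; a generic sketch (on a toy quadrature problem, not the inverse heat problem of the paper) with an assumed error order p = 2 looks as follows.

```python
import numpy as np

def richardson_extrapolate(values, hs, order=2):
    """Richardson extrapolation sketch: given a quantity Q(h) computed on two
    grids with steps h1 > h2 and an assumed error expansion Q(h) = Q* + C*h^p,
    eliminate the leading error term to estimate Q*. The order p = 2 is an
    assumption for illustration."""
    (q1, q2), (h1, h2) = values, hs
    r = (h1 / h2) ** order
    return (r * q2 - q1) / (r - 1.0)

# Toy usage: composite trapezoid integral of sin on [0, pi] on two grids
def trap(n):
    x = np.linspace(0.0, np.pi, n + 1)
    y = np.sin(x)
    h = x[1] - x[0]
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

q_coarse, q_fine = trap(16), trap(32)
q_star = richardson_extrapolate((q_coarse, q_fine), (np.pi / 16, np.pi / 32))
print(q_coarse, q_fine, q_star)   # q_star is much closer to the exact value 2
```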

  5. Stonyakin F.S., Stepanov A.N., Gasnikov A.V., Titov A.A.
    Mirror descent for constrained optimization problems with large subgradient values of functional constraints
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 301-317

    The paper is devoted to the problem of minimizing a non-smooth functional $f$ with a non-positive non-smooth Lipschitz-continuous functional constraint. We consider the formulation of the problem in the case of quasi-convex functionals. We propose new step-size strategies and adaptive stopping rules for Mirror Descent for the considered class of problems. It is shown that the methods are applicable to objective functionals of various levels of smoothness. By applying a special restart technique to the considered version of Mirror Descent, an optimal method is proposed for optimization problems with strongly convex objective functionals. Estimates of the convergence rate of the considered methods are obtained depending on the level of smoothness of the objective functional. These estimates indicate the optimality of the considered methods from the point of view of the theory of lower oracle bounds. In particular, the optimality of our approach for Hölder-continuous quasi-convex (sub)differentiable objective functionals is proved. In addition, the case of a quasi-convex objective functional and functional constraint is considered. Thus, we consider the problem of minimizing a non-smooth functional $f$ in the presence of a Lipschitz-continuous non-positive non-smooth functional constraint $g$, and the problem statement in the cases of quasi-convex and strongly (quasi-)convex functionals is treated separately. The paper presents numerical experiments demonstrating the advantages of the considered methods.
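    A rough Euclidean sketch of the restart idea for strongly convex objectives: run a subgradient stage tuned to the current distance bound R, restart from its averaged output and halve R. The constants, the inner method and the toy problem below are illustrative assumptions; the paper works with general Mirror Descent and adaptive stopping rules.

```python
import numpy as np

def subgradient_stage(f_sub, x0, R, M, n_iters):
    """One stage of the plain subgradient method with constant step
    R / (M * sqrt(N)); returns the averaged iterate, for which the classical
    bound f(x_avg) - f* <= M * R / sqrt(N) holds on a ball of radius R."""
    x, xs = x0.copy(), []
    h = R / (M * np.sqrt(n_iters))
    for _ in range(n_iters):
        x = x - h * f_sub(x)
        xs.append(x.copy())
    return np.mean(xs, axis=0)

def restarted_method(f_sub, x0, R0, M, mu, n_restarts):
    """Restart sketch for a mu-strongly convex objective: each stage is given
    enough iterations to halve the distance bound R, yielding a linear rate
    in the number of restarts. Constants are illustrative assumptions."""
    x, R = np.asarray(x0, dtype=float), R0
    for _ in range(n_restarts):
        n_k = int(np.ceil((8.0 * M / (mu * R)) ** 2)) + 1
        x = subgradient_stage(f_sub, x, R, M, n_k)
        R /= 2.0
    return x

# Toy usage: f(x) = ||x - a|| + (mu/2) ||x - a||^2, strongly convex, non-smooth at a
a, mu = np.array([1.0, -2.0, 0.5]), 0.5
f_sub = lambda x: (x - a) / max(np.linalg.norm(x - a), 1e-12) + mu * (x - a)
x_hat = restarted_method(f_sub, np.zeros(3), R0=4.0, M=3.0, mu=mu, n_restarts=6)
print(np.linalg.norm(x_hat - a))   # should be small
```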

  6. Yudin N.E.
    Modified Gauss–Newton method for solving a smooth system of nonlinear equations
    Computer Research and Modeling, 2021, v. 13, no. 4, pp. 697-723

    In this paper, we introduce a new version of the Gauss–Newton method for solving a system of nonlinear equations, based on an upper bound for the residual of the system and a quadratic regularization term. In practice the introduced Gauss–Newton method forms a whole parameterized family of methods for solving systems of nonlinear equations and regression problems. The developed family consists entirely of iterative methods, with generalizations to the case of non-Euclidean normed spaces, including special forms of Levenberg–Marquardt algorithms. The developed methods use a local model based on a parameterized proximal mapping, which allows us to use an inexact «black-box» oracle with restrictions on the computational precision and computational complexity. We perform an efficiency analysis, including global and local convergence, for the developed family of methods with an arbitrary oracle, in terms of iteration complexity, the precision and complexity of both the local model and the oracle, and the problem dimensionality. We present global sublinear convergence rates for the methods of the proposed family for solving a system of nonlinear equations consisting of Lipschitz-smooth functions. We prove local superlinear convergence under additional natural non-degeneracy assumptions on the system of nonlinear functions. We prove both local and global linear convergence for a system of nonlinear equations under the Polyak–Łojasiewicz condition for the proposed Gauss–Newton methods. Besides the theoretical justification of the methods, we also consider practical implementation issues. In particular, for the conducted experiments we present efficient computational schemes for the exact oracle with respect to the problem dimensionality. The proposed family of methods unites several existing Gauss–Newton modifications frequently used in practice, allowing us to construct a flexible and convenient method implementable with standard convex optimization and computational linear algebra techniques.
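    For orientation, a minimal Levenberg–Marquardt-style iteration of this kind (a Gauss–Newton step with a quadratic regularization term and an exact oracle) can be sketched as follows; the fixed regularization parameter and the toy system are assumptions, not the paper's parameterized proximal mapping.

```python
import numpy as np

def gauss_newton_reg(F, J, x0, lam=1e-2, tol=1e-10, max_iter=100):
    """Regularized Gauss-Newton sketch: each iteration minimizes the local
    model ||F(x) + J(x) d||^2 + lam * ||d||^2 in d, i.e. solves the normal
    equations (J^T J + lam I) d = -J^T F. The fixed lam is an illustrative
    choice; adaptive rules are typically used in practice."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, Jx = F(x), J(x)
        if np.linalg.norm(r) < tol:
            break
        d = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -Jx.T @ r)
        x = x + d
    return x

# Toy system of nonlinear equations: x^2 + y^2 = 1, y = x^3
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[1] - v[0] ** 3])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [-3 * v[0] ** 2, 1.0]])
print(gauss_newton_reg(F, J, [1.0, 1.0]))
```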

  7. Pletnev N.V.
    Fast first-order gradient methods adaptive to the strong convexity and Lipschitz constants
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 947-963

    The work is devoted to the construction of first-order methods of convex optimization that are efficient and applicable to real tasks, i.e., methods using only the values of the objective function and its derivatives. The construction uses OGM-G, a fast gradient method that is optimal in complexity but requires knowledge of the Lipschitz constant of the gradient and of the strong convexity constant in order to determine the number of steps and the step length. This requirement makes practical use very hard. An algorithm ACGM, adaptive with respect to the strong convexity constant, is proposed, based on restarts of OGM-G with an update of the estimate of the strong convexity constant, as well as an algorithm ALGM, adaptive with respect to the Lipschitz constant of the gradient, in which the restarts of OGM-G are supplemented by a selection of the Lipschitz constant with verification of the smoothness conditions used in the universal gradient descent method. This eliminates the disadvantages of the original method associated with the need to know these constants and makes practical use possible. The optimality of the complexity estimates for the constructed algorithms is proved. To verify the results obtained, experiments on model functions and real machine learning tasks are carried out.
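    The Lipschitz-constant selection mentioned above can be illustrated by the standard backtracking test from universal/adaptive gradient methods; the doubling/halving rule and the toy quadratic below are assumptions for illustration, and OGM-G itself is not reproduced here.

```python
import numpy as np

def adaptive_L_gradient_step(f, grad, x, L):
    """One gradient step with on-the-fly selection of the Lipschitz constant:
    try a step with the current estimate L and double L until the smoothness
    inequality f(y) <= f(x) + <grad f(x), y - x> + (L/2)||y - x||^2 holds.
    Halving L after a successful step keeps the estimate from only growing;
    this is a common heuristic, not the paper's exact rule."""
    g = grad(x)
    while True:
        y = x - g / L
        if f(y) <= f(x) + g @ (y - x) + 0.5 * L * np.dot(y - x, y - x):
            return y, max(L / 2.0, 1e-12)
        L *= 2.0

# Toy usage: quadratic with an a priori unknown Lipschitz constant of the gradient
A = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x, L = np.ones(3), 1.0
for _ in range(200):
    x, L = adaptive_L_gradient_step(f, grad, x, L)
print(f(x), L)
```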

  8. Puchinin S.M., Korolkov E.R., Stonyakin F.S., Alkousa M.S., Vyguzov A.A.
    Subgradient methods with B.T. Polyak-type step for quasiconvex minimization problems with inequality constraints and analogs of the sharp minimum
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 105-122

    In this paper, we consider two variants of the concept of a sharp minimum for mathematical programming problems with a quasiconvex objective function and inequality constraints. We investigate the problem of describing a variant of a simple subgradient method with switching between productive and non-productive steps for which, on a class of problems with Lipschitz functions, convergence to the set of exact solutions or its vicinity with the rate of a geometric progression can be guaranteed. It is important that implementing the proposed method does not require knowledge of the sharp minimum parameter, which is usually difficult to estimate in practice. To overcome this problem, the authors propose to use a step adjustment procedure similar to the one previously proposed by B. T. Polyak. However, in this case, in contrast to the class of problems without constraints, the problem arises of knowing the exact minimal value of the objective function. The paper describes conditions on the inexactness of this information that make it possible to preserve convergence with the rate of a geometric progression to the vicinity of the set of minimum points of the problem. Two analogs of the concept of a sharp minimum for problems with inequality constraints are considered. In the first one, approximation to the exact solution is guaranteed only up to a pre-selected level of accuracy; for it, we consider the case when the minimal value of the objective function is unknown and only some approximation of this value is given. We describe conditions on the inexact minimal value of the objective function under which convergence to the vicinity of the desired set of points with the rate of a geometric progression is still preserved. The second considered variant of the sharp minimum does not depend on the desired accuracy of the problem. For it, we propose a slightly different way of checking whether a step is productive, which allows us to guarantee the convergence of the method to the exact solution with the rate of a geometric progression in the case of exact information. Convergence estimates are proved under conditions of weak convexity of the constraints and some restrictions on the choice of the initial point, and a corollary is formulated for the convex case, when the additional assumption on the choice of the initial point is no longer needed. For both approaches, it is proven that the distance from the current point to the set of solutions decreases as the number of iterations grows. This, in particular, makes it possible to require the used properties of the functions (Lipschitz continuity, sharp minimum) only on a bounded set. Some computational experiments are performed, including for the truss topology design problem.
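    A minimal sketch of a switching subgradient method with a Polyak-type step is given below; the exact productivity test, step formulas and the toy problem are illustrative assumptions and may differ from the paper's.

```python
import numpy as np

def polyak_switching_subgradient(f, f_sub, f_star, g, g_sub, x0, eps, n_iters):
    """Switching subgradient sketch with a Polyak-type step: on productive
    steps (constraint value <= eps) the step length uses the known or
    approximate minimal value f_star; on non-productive steps the step pushes
    the constraint value down."""
    x, best = np.asarray(x0, dtype=float), None
    for _ in range(n_iters):
        if g(x) <= eps:                        # productive step
            s = f_sub(x)
            h = (f(x) - f_star) / max(s @ s, 1e-12)
            if best is None or f(x) < f(best):
                best = x.copy()
        else:                                  # non-productive step
            s = g_sub(x)
            h = g(x) / max(s @ s, 1e-12)
        x = x - h * s
    return best if best is not None else x

# Toy usage: minimize ||x - a||_1 subject to ||x||_inf - 1 <= 0
a = np.array([2.0, -0.5, 0.3])
f = lambda x: np.abs(x - a).sum()
f_sub = lambda x: np.sign(x - a)
g = lambda x: np.abs(x).max() - 1.0

def g_sub(x):
    i = int(np.argmax(np.abs(x)))              # subgradient of the max-norm
    e = np.zeros_like(x)
    e[i] = np.sign(x[i]) if x[i] != 0 else 1.0
    return e

f_star = np.abs(np.clip(a, -1, 1) - a).sum()   # known minimal value for the toy
x_hat = polyak_switching_subgradient(f, f_sub, f_star, g, g_sub,
                                     np.zeros(3), 1e-3, 3000)
```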

  9. Ostroukhov P.A., Kamalov R.A., Dvurechensky P.E., Gasnikov A.V.
    Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 357-376

    In this paper we propose high-order (tensor) methods for two types of saddle point problems. Firstly, we consider the classic min-max saddle point problem. Secondly, we consider the search for a stationary point of the saddle point objective via minimization of the norm of its gradient. Obviously, a stationary point does not always coincide with an optimal point. However, if we have a linear optimization problem with linear constraints, the algorithm for gradient norm minimization becomes useful: in this case we can reconstruct the solution of the primal optimization problem from the solution of the gradient norm minimization of the dual function. In this paper we consider both types of problems without constraints. Additionally, we assume that the objective function is $\mu$-strongly convex in the first argument, $\mu$-strongly concave in the second argument, and that its $p$-th derivative is Lipschitz-continuous.

    For min-max problems we propose two algorithms. Since we consider a strongly convex-strongly concave problem, the first algorithm uses an existing tensor method for convex-concave saddle point problems and accelerates it with the restart technique; the complexity of such an algorithm is linear. If we additionally assume that our objective is first- and second-order Lipschitz, we can improve its performance even further by switching to another existing algorithm within its region of quadratic convergence. Thus, we obtain the second algorithm, which has a global linear convergence rate and a local quadratic convergence rate.

    Finally, in convex optimization there exists a special methodology for solving gradient norm minimization problems by tensor methods. Its main idea is to use existing (near-)optimal algorithms inside a special framework. We emphasize that inside this framework the assumption of strong convexity is not strictly necessary, because the convex objective can be regularized in a special way to make it strongly convex. In this article we transfer this framework to convex-concave objective functions and use it with the aforementioned algorithm having a global linear and a local quadratic convergence rate.

    Since the saddle point problem is a particular case of the monotone variational inequality problem, the proposed methods also apply to strongly monotone variational inequality problems.
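    The restart mechanism for the strongly convex-strongly concave case can be sketched with a first-order base method in place of a tensor one; the extragradient solver, step size and budgets below are assumptions for illustration only.

```python
import numpy as np

def extragradient(grad_x, grad_y, x, y, step, n_iters):
    """Base solver used only for illustration: the extragradient method for a
    convex-concave saddle point min_x max_y f(x, y)."""
    for _ in range(n_iters):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x_half, y_half = x - step * gx, y + step * gy
        gx, gy = grad_x(x_half, y_half), grad_y(x_half, y_half)
        x, y = x - step * gx, y + step * gy
    return x, y

def restarted_saddle(grad_x, grad_y, x0, y0, step, inner_iters, n_restarts):
    """Restart sketch: run a convex-concave base method for a fixed budget,
    restart from the returned point and repeat; strong convexity/concavity
    turns each restart into a constant-factor contraction, giving a global
    linear rate. The paper uses high-order (tensor) methods as the base."""
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(n_restarts):
        x, y = extragradient(grad_x, grad_y, x, y, step, inner_iters)
    return x, y

# Toy saddle point: f(x, y) = (mu/2)||x||^2 + x^T B y - (mu/2)||y||^2
mu, B = 0.1, np.array([[1.0, 2.0], [0.0, 1.0]])
grad_x = lambda x, y: mu * x + B @ y
grad_y = lambda x, y: B.T @ x - mu * y
x_hat, y_hat = restarted_saddle(grad_x, grad_y, np.ones(2), np.ones(2),
                                step=0.2, inner_iters=50, n_restarts=10)
print(np.linalg.norm(x_hat), np.linalg.norm(y_hat))   # both shrink toward 0
```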

  10. Zabotin V.I., Chernyshevskij P.A.
    Extension of Strongin’s Global Optimization Algorithm to a Function Continuous on a Compact Interval
    Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1111-1119

    The Lipschitz continuity property has long been used, and continues to be used, to solve the global optimization problem. Here we can mention the works of Piyavskii, Yevtushenko, Strongin, Shubert, Sergeyev, Kvasov and others. Most papers assume a priori knowledge of the Lipschitz constant, but obtaining this constant is a separate problem. Moreover, we must prove that an objective function is really Lipschitz, which is a complicated problem too. In the case where the Lipschitz continuity is established, Strongin proposed an algorithm for the global optimization of a function satisfying the Lipschitz condition on a compact interval without any a priori knowledge of the Lipschitz estimate. The algorithm not only finds a global extremum, it determines the Lipschitz estimate as well. It is known that every function satisfying the Lipschitz condition on a compact convex set is uniformly continuous, but the converse is not always true. However, there exist models (Arutyunova, Dulliev, Zabotin) whose study requires the minimization of a continuous but definitely not Lipschitz function. One of the algorithms for solving such a problem was proposed by R. J. Vanderbei. In his work he introduced a generalization of the Lipschitz property called $\varepsilon$-Lipschitz and proved that a function defined on a compact convex set is uniformly continuous if and only if it satisfies the $\varepsilon$-Lipschitz condition. This property allowed him to extend Piyavskii's method. However, Vanderbei assumed that for a given value of $\varepsilon$ it is possible to obtain the associated Lipschitz $\varepsilon$-constant, which is itself a very difficult problem. Thus, there is a need to construct, for a function continuous on a compact convex domain, a global optimization algorithm which works in a manner similar to Strongin's algorithm, i.e., without any a priori knowledge of the Lipschitz $\varepsilon$-constant. In this paper we propose an extension of Strongin's global optimization algorithm to a function continuous on a compact interval using the $\varepsilon$-Lipschitz concept, prove its convergence and solve some numerical examples using software that implements the developed method.
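    For reference, here is a compact sketch of the classical Lipschitz version of Strongin's algorithm with an adaptively estimated constant; the paper's extension replaces this estimate with an $\varepsilon$-Lipschitz one, which is not reproduced here.

```python
import numpy as np

def strongin_search(f, a, b, r=2.0, n_trials=100):
    """Sketch of Strongin's information-statistical algorithm on [a, b]: the
    Lipschitz constant is not known in advance but estimated from the trials
    already made (times a reliability parameter r), the interval with the
    largest characteristic R_i is subdivided, and the process repeats."""
    xs, zs = [a, b], [f(a), f(b)]
    for _ in range(n_trials - 2):
        order = np.argsort(xs)
        x, z = np.array(xs)[order], np.array(zs)[order]
        dx = np.maximum(np.diff(x), 1e-12)
        dz = np.diff(z)
        M = max(np.max(np.abs(dz) / dx), 1e-12)     # adaptive Lipschitz estimate
        m = r * M
        # Strongin's characteristic of each interval
        R = m * dx + dz ** 2 / (m * dx) - 2.0 * (z[1:] + z[:-1])
        t = int(np.argmax(R))
        # new trial point inside the chosen interval
        x_new = 0.5 * (x[t] + x[t + 1]) - dz[t] / (2.0 * m)
        xs.append(float(x_new))
        zs.append(f(x_new))
    i_best = int(np.argmin(zs))
    return xs[i_best], zs[i_best]

# Toy usage: multi-extremal test function on [0, 10]
f = lambda x: np.sin(x) + np.sin(10.0 * x / 3.0)
print(strongin_search(f, 0.0, 10.0, r=2.0, n_trials=200))
```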
