Search results for 'Newton’s methods of optimization':
Articles found: 12
  1. Sviridenko A.B.
    Designing a zero on a linear manifold, a polyhedron, and a vertex of a polyhedron. Newton methods of minimization
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 563-591

    We consider approaches to the construction of methods for solving quadratic programming problems that arise in computing descent directions for the repeated minimization of smooth functions on a set defined by a system of linear equalities. The approach consists of two stages.

    At the first stage, the quadratic programming problem is transformed by a numerically stable direct multiplicative algorithm into the equivalent problem of projecting the origin of coordinates onto a linear manifold, which defines a new mathematical formulation of the dual quadratic problem. For this purpose, a numerically stable direct multiplicative method for solving systems of linear equations is proposed that takes into account the sparseness of matrices presented in packed form. The advantage of this approach is that the modified Cholesky factors used to construct an essentially positive definite matrix of the system of equations, and the solution of that system, are computed within a single procedure. A further advantage is the possibility of minimizing the fill-in of the main rows of the multipliers without losing the accuracy of the results; no changes are made in the position of the next processed row of the matrix, which allows the use of static data storage formats.
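
    The "modified Cholesky" device can be sketched with the standard diagonal-shift scheme (a minimal illustration of how an essentially positive definite system matrix is commonly produced; it is not the authors' direct multiplicative procedure, and the function name and shift strategy are ours):

        import numpy as np

        def psd_cholesky(H, beta=1e-3, max_tries=60):
            """Return L with L @ L.T = H + tau*I positive definite.

            If plain Cholesky fails, retry with an increasing diagonal
            shift tau (cf. Nocedal & Wright, Algorithm 3.3).
            """
            tau = 0.0 if np.all(np.diag(H) > 0) else beta
            for _ in range(max_tries):
                try:
                    return np.linalg.cholesky(H + tau * np.eye(H.shape[0]))
                except np.linalg.LinAlgError:
                    tau = max(2.0 * tau, beta)
            raise RuntimeError("matrix could not be made positive definite")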

    At the second stage, the necessary and sufficient optimality conditions in Kuhn–Tucker form determine the calculation of the descent direction: the solution of the dual quadratic problem is reduced to solving a system of linear equations with a symmetric positive definite matrix to compute the Lagrange multipliers, and to substituting the solution into the formula for the descent direction.
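
    The reduction described here is easy to state for the simplest case, projecting the origin onto the manifold $\{x : Ax = b\}$: the KKT conditions give $x = A^T \lambda$ with $A A^T \lambda = b$, a symmetric positive definite system when $A$ has full row rank. A schematic dense-algebra sketch (not the paper's multiplicative algorithm):

        import numpy as np

        def project_origin(A, b):
            """Minimize 0.5*||x||^2 subject to A x = b via the KKT system."""
            lam = np.linalg.solve(A @ A.T, b)   # Lagrange multipliers (SPD system)
            return A.T @ lam                    # the projection itself

        A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
        b = np.array([1.0, 2.0])
        x = project_origin(A, b)
        assert np.allclose(A @ x, b)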

    It is proved that the proposed approach to computing the descent direction by numerically stable direct multiplicative methods requires, at one iteration, an amount of computation smaller by a cubic law than one iteration of the well-known dual method of Gill and Murray. Besides, the proposed method allows the computational process to be organized from any starting point that the user chooses as the initial approximation to the solution.

    Variants of the problem of projecting the origin of coordinates onto a linear manifold, a convex polyhedron, and a vertex of a convex polyhedron are presented, and the relationship and implementation of the methods for solving these problems are described.

  2. Zelenkov G.A., Sviridenko A.B.
    Approach to development of algorithms of Newtonian methods of unconstrained optimization, their software implementation and benchmarking
    Computer Research and Modeling, 2013, v. 5, no. 3, pp. 367-377

    An approach is proposed for increasing the efficiency of Gill and Murray's algorithm for Newtonian methods of unconstrained optimization with step adjustment; it rests on Cholesky factorization. It is proved that the strategy for choosing the descent direction also determines the solution of the problems of scaling the steps during descent, of approximation by non-quadratic functions, and of integration with a trust-region method.
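
    A generic damped Newton iteration of this flavor might look as follows (a sketch only: the diagonal-shift Cholesky and Armijo backtracking stand in for the specific descent-direction and step-adjustment strategies studied in the paper):

        import numpy as np

        def damped_newton(f, grad, hess, x, tol=1e-8, max_iter=100):
            """Newton descent with a Cholesky-based direction and step adjustment."""
            for _ in range(max_iter):
                g = grad(x)
                if np.linalg.norm(g) < tol:
                    break
                H, tau = hess(x), 0.0
                while True:                      # shift until Cholesky succeeds
                    try:
                        L = np.linalg.cholesky(H + tau * np.eye(len(x)))
                        break
                    except np.linalg.LinAlgError:
                        tau = max(2.0 * tau, 1e-3)
                d = np.linalg.solve(L.T, np.linalg.solve(L, -g))  # (H + tau*I) d = -g
                t = 1.0                          # Armijo backtracking step adjustment
                while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
                    t *= 0.5
                x = x + t * d
            return x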

  3. Sviridenko A.B.
    Direct multiplicative methods for sparse matrices. Unbalanced linear systems.
    Computer Research and Modeling, 2016, v. 8, no. 6, pp. 833-860

    The small practical value of many numerical methods for solving systems of linear equations with ill-conditioned matrices is due to the fact that in practice these methods behave quite differently than in exact arithmetic. Historically, unlike in the numerical algebra of 'medium-sized' problems, not enough attention was given to stability, and the emphasis was on solving problems of the maximal order allowed by the capabilities of the computer, including at the expense of some loss of accuracy. Therefore, the main objects of study are the most appropriate storage of the information contained in a sparse matrix and the maintenance of the highest degree of sparsity at all stages of the computational process. Thus, the development of efficient numerical methods for solving unstable systems is among the topical problems of computational mathematics.

    In this paper, an approach is presented to the construction of numerically stable direct multiplicative methods for solving systems of linear equations that takes into account the sparseness of matrices presented in packed form. The advantage of the approach consists in minimizing the fill-in of the main rows of the multipliers without compromising the accuracy of the results; no changes are made in the position of the next processed row of the matrix, which allows the use of static data storage formats. A storage format for sparse matrices is studied whose advantage is the possibility of executing any matrix operations in parallel without unpacking, which significantly reduces the execution time and the memory footprint.

    Direct multiplicative methods for solving systems of linear equations are best suited for solving problems of large size on a computer: sparse-matrix systems make it possible to obtain multipliers whose main rows are also sparse, while the operation of multiplying a row vector by a multiplier has complexity proportional to the number of nonzero elements of that multiplier.
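
    The complexity claim is easy to make concrete: a multiplier differs from the identity in a single main row, so applying it to a vector touches only the nonzeros of that row. A toy sketch with a dict of nonzeros (our own illustration format, not the packed format studied in the paper):

        def apply_row_multiplier(k, row, x):
            """y = M x, where M is the identity with row k replaced by `row`.

            `row` maps column index -> value; the cost is O(nnz(row)).
            """
            y = list(x)
            y[k] = sum(v * x[j] for j, v in row.items())
            return y

        x = [1.0, 2.0, 3.0]
        print(apply_row_multiplier(1, {0: -0.5, 1: 1.0, 2: 0.25}, x))  # [1.0, 2.25, 3.0]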

    As a direct continuation of this work, it is proposed to base the construction of a direct multiplicative algorithm of linear programming on a modification of the direct multiplicative algorithm for solving systems of linear equations that integrates the linear programming technique for choosing the pivot element. Direct multiplicative methods of linear programming are, in turn, best suited for constructing a direct multiplicative algorithm for setting the descent direction in Newton methods of unconstrained optimization, by integrating one of the existing techniques for constructing an essentially positive definite matrix of second derivatives.

  4. Sviridenko A.B.
    Direct multiplicative methods for sparse matrices. Linear programming
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 143-165

    Multiplicative methods for sparse matrices are best suited to reducing the complexity of the linear-system solves performed at each iteration of the simplex method. The constraint matrix in these problems is sparsely populated with nonzero elements, which makes it possible to obtain multipliers whose main columns are also sparse, while the operation of multiplying a vector by a multiplier has complexity proportional to the number of nonzero elements of that multiplier. In addition, on the transition to an adjacent basis the multiplicative representation is quite easily updated. To improve the efficiency of such methods, the occupancy of the multiplicative representation by nonzero elements must be decreased. However, at each iteration of the algorithm one more multiplier is added to the sequence, and the complexity of multiplication grows linearly with the length of this sequence. One therefore has to recompute the representation of the inverse matrix from time to time, rebuilding it from the identity; overall, however, this does not solve the problem. In addition, the set of multipliers is a sequence of structures whose size is inconveniently large and not known precisely in advance. Multiplicative methods do not take into account the high degree of sparseness of the original matrices and the equality constraints, and they require the determination of an initial basic feasible solution of the problem; consequently, they allow neither a reduction of the dimensionality of the linear programming problem nor a regular compression procedure, that is, a reduction of the dimensionality of the multipliers and the exclusion of nonzero elements from all the main columns of the multipliers obtained at previous iterations. Thus, the development of numerical methods for solving linear programming problems that make it possible to overcome or substantially reduce the shortcomings of these implementations of the simplex method is among the topical problems of computational mathematics.
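
    The "one more multiplier per iteration" mechanics is the classical product form of the inverse: $B^{-1}$ is kept as a sequence of eta matrices, and a basis change appends one eta built from the entering column. A compact sketch of the textbook scheme (the compression procedure proposed in the paper is not reproduced here):

        import numpy as np

        def apply_etas(etas, v):
            """Compute B^{-1} v, with B^{-1} stored as a product of eta matrices.

            Each eta is (r, d), applied as E v = v + v[r] * d, where d encodes
            the modified column r (with d[r] = 1/pivot - 1).
            """
            v = v.astype(float)
            for r, d in etas:
                v = v + v[r] * d
            return v

        def append_eta(etas, a_q, r):
            """Basis change: column a_q enters, basis position r leaves."""
            u = apply_etas(etas, a_q)            # u = B^{-1} a_q
            d = -u / u[r]
            d[r] = 1.0 / u[r] - 1.0
            etas.append((r, d))

        etas = []
        append_eta(etas, np.array([2.0, 1.0]), 0)      # basis becomes [[2, 0], [1, 1]]
        print(apply_etas(etas, np.array([2.0, 1.0])))  # B^{-1} @ [2, 1] = [1, 0]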

    In this paper, an approach is presented to the construction of numerically stable direct multiplicative methods for solving linear programming problems that takes into account the sparseness of matrices presented in packed form. The advantage of the approach is the reduction of dimensionality and the minimization of the fill-in of the main rows of the multipliers without compromising the accuracy of the results; no changes are made in the position of the next processed row of the matrix, which allows the use of static data storage formats.

    As a direct continuation of this work, it is proposed to base the construction of a direct multiplicative algorithm for setting the descent direction in Newton methods of unconstrained optimization on a modification of the direct multiplicative method of linear programming that integrates one of the existing techniques for constructing an essentially positive definite matrix of second derivatives.

  5. Gasnikov A.V., Gorbunov E.A., Kovalev D.A., Mohammed A.A., Chernousova E.O.
    The global rate of convergence for optimal tensor methods in smooth convex optimization
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 737-753

    In this work we consider the Monteiro – Svaiter accelerated hybrid proximal extragradient (A-HPE) framework and the accelerated Newton proximal extragradient (A-NPE) framework. The latter contains an optimal method for rather smooth convex optimization problems with a second-order oracle. We generalize the A-NPE framework to higher-order derivative oracles (schemes). We replace the Newton-type step in A-NPE that was used for the auxiliary problem by a regularized Newton (tensor) type step (Yu. Nesterov, 2018). Moreover, we generalize the large-step A-HPE/A-NPE framework by replacing Monteiro – Svaiter's large-step condition so that this framework can work for high-order schemes. The main contribution of the paper is as follows: we propose optimal high-order methods for convex optimization problems. As far as we know, at the moment optimal methods that match the lower bounds exist only for the zero, first and second order. For higher-order schemes there was a gap between the lower bounds (Arjevani, Shamir, Shiff, 2017) and the existing high-order (tensor) methods (Nesterov – Polyak, 2006; Yu. Nesterov, 2008; M. Baes, 2009; Yu. Nesterov, 2018). Asymptotically, the ratio of the rates of convergence of the best existing methods to the lower bounds is about 1.5. In this work we eliminate this gap and show that the lower bounds are tight. We also consider rather smooth strongly convex optimization problems and show how to generalize the proposed methods to this case. The basic idea is to use the restart technique until the iterates reach the region of quadratic convergence of Newton's method and then use Newton's method. One can show that the considered method converges with the optimal rates up to a logarithmic factor. Note that the technique proposed in this work can be generalized to the case when the auxiliary problem cannot be solved exactly, and even when the derivatives of the functional cannot be calculated exactly. Moreover, the proposed technique can be generalized to composite optimization problems and, in particular, to constrained convex optimization problems. We also formulate a list of open questions that arise around the main result of this paper (an optimal universal method of high order, etc.).
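
    For reference, the optimal rate in question can be written explicitly (a standard form of this bound for convex problems with a Lipschitz $p$-th derivative, stated here for orientation): $f(x_k) - f_* = O(k^{-(3p+1)/2})$, i.e., $O(\varepsilon^{-2/(3p+1)})$ iterations suffice to reach accuracy $\varepsilon$; $p = 1$ recovers the accelerated gradient rate $k^{-2}$, and $p = 2$ the Monteiro – Svaiter rate $k^{-7/2}$.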

  6. Batgerel B., Nikonov E.G., Puzynin I.V.
    Procedure for constructing explicit, implicit and symmetric symplectic schemes for the numerical solution of Hamiltonian systems of equations
    Computer Research and Modeling, 2016, v. 8, no. 6, pp. 861-871

    Equations of motion in Newtonian and Hamiltonian forms are used in classical molecular dynamics for simulating the time evolution of particle systems. When the Newton equations of motion are used for finding the particle coordinates and velocities in an $N$-particle system, it is necessary to solve $3N$ ordinary differential equations of second order. Traditionally, numerical schemes of the Verlet method are used for solving the Newtonian equations of motion in molecular dynamics. To preserve the stability of Verlet numerical schemes on sufficiently large time intervals, the integration step has to be decreased, which leads to a significant increase in the volume of calculations. In most molecular dynamics software packages, numerical schemes of the Verlet method with control of the conservation of the Hamiltonian (the energy of the system) at every time moment are used for the numerical integration of the equations of motion.

    Two complementary approaches can be used to decrease the computational time in molecular dynamics calculations. The first is based on the enhancement and software optimization of existing molecular dynamics packages by means of vectorization, parallelization and special-purpose processors. The second is based on the development of efficient methods for the numerical integration of the equations of motion. In this work, a procedure is proposed for constructing explicit, implicit and symmetric symplectic numerical schemes of a given order of approximation accuracy with respect to the integration step for solving the molecular dynamics equations of motion in Hamiltonian form. The proposed procedure is based on the following points: the Hamiltonian formulation of the equations of motion; the use of the Taylor expansion of the exact solution; and the use of generating functions, which preserve the geometrical properties of the exact solution, in the derivation of the numerical schemes. Numerical experiments show that the symmetric symplectic third-order scheme obtained in this work preserves the basic properties of the exact solution in the approximate solution. It is more stable with respect to the integration step and conserves the Hamiltonian of the system with higher accuracy over a large integration interval than the second-order Verlet numerical schemes.
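
    The second-order Verlet baseline mentioned above is compact enough to state in full; here is a minimal sketch of the velocity Verlet scheme for a separable Hamiltonian $H(q, p) = p^2/2m + U(q)$ (the third-order symmetric symplectic scheme constructed in the paper is not reproduced here):

        import numpy as np

        def velocity_verlet(q, p, force, dt, n_steps, m=1.0):
            """Second-order symplectic integration of dq/dt = p/m, dp/dt = F(q)."""
            for _ in range(n_steps):
                p = p + 0.5 * dt * force(q)      # half kick
                q = q + dt * p / m               # drift
                p = p + 0.5 * dt * force(q)      # half kick
            return q, p

        # Harmonic oscillator U(q) = q^2/2: the energy error stays bounded
        # over long runs instead of drifting, the hallmark of symplecticity.
        q, p = velocity_verlet(np.array([1.0]), np.array([0.0]),
                               lambda q: -q, dt=0.05, n_steps=10_000)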

  7. Sviridenko A.B.
    Direct multiplicative methods for sparse matrices. Newton methods
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 679-703

    We consider a numerically stable direct multiplicative algorithm for solving systems of linear equations that takes into account the sparseness of matrices presented in packed form. The advantage of the algorithm is the ability to minimize the fill-in of the main rows of the multipliers without losing the accuracy of the results. Moreover, no changes are made in the position of the next processed row of the matrix, which allows using static data storage formats. Solving a linear system by a direct multiplicative algorithm is, like solving it with an $LU$-decomposition, just another scheme for implementing the Gaussian elimination method.
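
    That a multiplicative scheme is "another implementation of Gaussian elimination" is visible in the classical product form of the inverse, where $A^{-1}$ is accumulated as a product of elementary multipliers. A dense, unpivoted sketch (the paper's algorithm additionally exploits sparsity, packed storage and numerical stabilization):

        import numpy as np

        def product_form_solve(A, b):
            """Solve A x = b via A^{-1} = M_n ... M_1 (Gauss-Jordan product form)."""
            n = A.shape[0]
            W = A.astype(float).copy()
            multipliers = []
            for k in range(n):
                col, piv = W[:, k].copy(), W[k, k]   # assumes nonzero pivots
                M = np.eye(n)
                M[:, k] = -col / piv                 # elementary multiplier:
                M[k, k] = 1.0 / piv                  # identity except column k
                multipliers.append(M)
                W = M @ W                            # column k of W becomes e_k
            x = b.astype(float)
            for M in multipliers:                    # x = M_n ... M_1 b
                x = M @ x
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        assert np.allclose(product_form_solve(A, b), np.linalg.solve(A, b))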

    In this paper, this algorithm is the basis for solving the following problems:

    Problem 1. Setting the descent direction in Newtonian methods of unconstrained optimization by integrating one of the known techniques for constructing an essentially positive definite matrix. This approach allows us to weaken or remove the additional specific difficulties caused by the need to solve large systems of equations with sparse matrices presented in packed form.

    Problem 2. Construction of a new mathematical formulation of the quadratic programming problem and a new form of specifying the necessary and sufficient optimality conditions. They are quite simple and can be used to construct mathematical programming methods, for example, for finding the minimum of a quadratic function on a polyhedral set of constraints, based on solving systems of linear equations whose dimension is not higher than the number of variables of the objective function.

    Problem 3. Construction of a continuous analogue of the problem of minimizing a real quadratic polynomial in Boolean variables and a new form of specifying the necessary and sufficient optimality conditions for the development of methods that solve it in polynomial time. As a result, the original problem is reduced to the problem of finding the minimum distance between the origin and a corner point of a convex polyhedron that is a perturbation of the $n$-dimensional cube and is described by a system of double linear inequalities with an upper triangular matrix of coefficients with units on the main diagonal. Only two faces need to be investigated, one of which, or both, contains the vertices closest to the origin. To compute them, it suffices to solve $4n – 4$ systems of linear equations and to choose among the solutions all the nearest equidistant vertices in polynomial time. The problem of minimizing a quadratic polynomial is $NP$-hard, since the $NP$-hard vertex cover problem for an arbitrary graph reduces to it. It would follow therefrom that $P = NP$, a conclusion that rests on going beyond the limits of integer optimization methods.

  8. Khudhur H.M., Halil I.H.
    Noise removal from images using the proposed three-term conjugate gradient algorithm
    Computer Research and Modeling, 2024, v. 16, no. 4, pp. 841-853

    Conjugate gradient algorithms represent an important class of unconstrained optimization algorithms with strong local and global convergence properties and modest memory requirements. These algorithms have advantages that place them between the steepest descent method and Newton's algorithm: they require calculating only the first derivatives and, unlike Newton's algorithm, do not require calculating and storing the second derivatives. They are also faster than the steepest descent algorithm, having overcome its slow convergence, and they do not need to compute the Hessian matrix or any of its approximations, so they are widely used in optimization applications. This study proposes a novel method for image restoration by fusing the convex combination method with a hybrid conjugate gradient (CG) method to create a hybrid three-term CG algorithm. The conjugacy parameter of the search direction combines the features of the Fletcher–Reeves (FR) parameter and its hybrid variant. The search direction is a combination of the current gradient direction, the previous search direction, and the gradient from the previous iteration. We show that the new algorithm possesses the descent property and global convergence under an inexact line search based on the standard Wolfe conditions and some assumptions. The numerical results show that, when used in image restoration problems, the new algorithm achieves high efficiency and accuracy and a high speed of convergence compared to the Fletcher–Reeves (FR) and three-term Fletcher–Reeves (TTFR) algorithms.
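
    A generic three-term FR-type update of the kind described might be sketched as follows (a schematic reading only: beta is the plain Fletcher–Reeves parameter, the third-term weight theta is a hypothetical constant of ours, and Armijo backtracking with a steepest-descent restart stands in for the Wolfe line search used in the paper):

        import numpy as np

        def three_term_cg(f, grad, x, theta=0.1, tol=1e-6, max_iter=1000):
            """Sketch of a three-term CG: d = -g_new + beta*d + theta*(-g_old)."""
            g = grad(x)
            d = -g
            for _ in range(max_iter):
                if np.linalg.norm(g) < tol:
                    break
                if g @ d >= 0:                   # safeguard: restart with steepest descent
                    d = -g
                t = 1.0                          # Armijo backtracking line search
                while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
                    t *= 0.5
                x_new = x + t * d
                g_new = grad(x_new)
                beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves parameter
                d = -g_new + beta * d - theta * g    # third term: previous gradient
                x, g = x_new, g_new
            return x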

  9. Sviridenko A.B.
    The correction to Newton's methods of optimization
    Computer Research and Modeling, 2015, v. 7, no. 4, pp. 835-863

    An approach is presented for decreasing the norm of the correction in Newton's methods of optimization based on Cholesky factorization; it integrates the linear programming technique for choosing the leading (pivot) element as the method for solving the system of equations. The issues of increasing the numerical stability of the Cholesky decomposition and of the Gaussian elimination method are investigated.
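
    The role of the choice of the leading (pivot) element for numerical stability can be illustrated with plain Gaussian elimination and partial pivoting (a textbook sketch; the paper's point is to import a pivot-selection technique from linear programming, which is not reproduced here):

        import numpy as np

        def gauss_solve(A, b):
            """Gaussian elimination with partial pivoting (row swaps for stability)."""
            A, b = A.astype(float).copy(), b.astype(float).copy()
            n = len(b)
            for k in range(n):
                p = k + np.argmax(np.abs(A[k:, k]))  # leading element: largest entry
                A[[k, p]] = A[[p, k]]                # in the current column
                b[[k, p]] = b[[p, k]]
                for i in range(k + 1, n):
                    m = A[i, k] / A[k, k]
                    A[i, k:] -= m * A[k, k:]
                    b[i] -= m * b[k]
            x = np.zeros(n)
            for i in range(n - 1, -1, -1):           # back substitution
                x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            return x

        # A tiny pivot in position (0, 0) is harmless once rows are swapped.
        A = np.array([[1e-12, 1.0], [1.0, 1.0]])
        b = np.array([1.0, 2.0])
        assert np.allclose(gauss_solve(A, b), np.linalg.solve(A, b))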

  10. Gasnikov A.V., Kovalev D.A.
    A hypothesis about the rate of global convergence for optimal methods (Newton’s type) in smooth convex optimization
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 305-314

    In this paper we discuss lower bounds on the convergence rate of high-order convex optimization methods and the attainability of these bounds. We formulate a hypothesis that covers all the cases; notably, we provide this statement without a proof. Newton's method is the most famous method that uses the gradient and the Hessian of the optimized function. However, it converges only locally, even for strongly convex functions. Global convergence can be achieved with the cubic regularization of Newton's method [Nesterov, Polyak, 2006], whose iteration cost is comparable with that of Newton's method and is equivalent to the inversion of the Hessian of the optimized function. Yu. Nesterov proposed an accelerated variant of Newton's method with cubic regularization in 2008 [Nesterov, 2008]. R. Monteiro and B. Svaiter managed to improve the global convergence rate of the cubic regularized method in 2013 [Monteiro, Svaiter, 2013]. Y. Arjevani, O. Shamir and R. Shiff showed in 2017 that the convergence bound of Monteiro and Svaiter is optimal (it cannot be improved by more than a logarithmic factor by any second-order method) [Arjevani et al., 2017]. They also managed to find bounds for convex optimization methods of $p$-th order for $p \geq 2$. However, for strongly convex functions they obtained bounds only for first- and second-order methods. In 2018 Yu. Nesterov proposed third-order convex optimization methods with a rate of convergence close to these lower bounds and with an iteration cost similar to that of Newton's method [Nesterov, 2018]. Consequently, it was shown that high-order methods can be practical. In this paper we formulate lower bounds for $p$-th order methods with $p \geq 3$ for strongly convex unconstrained optimization problems. This paper can be viewed as a short survey of the state of the art in high-order optimization methods.
