Search results for 'convergence of method':
Articles found: 80
  1. Pasechnyuk D.A., Stonyakin F.S.
    One method for minimization a convex Lipschitz-continuous function of two variables on a fixed square
    Computer Research and Modeling, 2019, v. 11, no. 3, pp. 379-395

    In the article we obtain estimates of the rate of convergence for the method recently proposed by Yu. E. Nesterov for minimizing a convex Lipschitz-continuous function of two variables on a square with a fixed side. The idea of the method is to divide the square into smaller parts and gradually remove them, so that the solution remains in a sufficiently small remaining part. The method consists in solving auxiliary problems of one-dimensional minimization along the separating segments and does not require calculating the exact value of the gradient of the objective functional. The main result of the paper is proved for the class of smooth convex functions having a Lipschitz-continuous gradient. Moreover, it is noted that the Lipschitz continuity of the gradient is sufficient to require not on the whole square, but only on some segments. It is shown that the method can work in the presence of errors in solving the auxiliary one-dimensional problems, as well as in calculating the direction of the gradients. We also describe a situation in which it is possible to neglect or reduce the time spent on solving the auxiliary one-dimensional problems. For some examples, experiments have demonstrated that the method can work effectively on certain classes of non-smooth functions. At the same time, an example of a simple non-smooth function is constructed for which, if the subgradient is chosen incorrectly, the convergence property of the method may fail even when the auxiliary one-dimensional problem is solved exactly. Experiments have shown that the method under consideration can achieve the desired accuracy of solving the problem in less time than the other methods considered (gradient descent and the ellipsoid method). In particular, it is noted that as the required accuracy of the solution increases, the running time of Yu. E. Nesterov's method can grow more slowly than that of the ellipsoid method.
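    As an illustration of the halving idea described above (not the authors' implementation: the test function, the gradient oracle and the iteration count below are invented for the example, and SciPy's bounded scalar minimizer stands in for the auxiliary one-dimensional solver), a minimal Python sketch might look like this:

        import numpy as np
        from scipy.optimize import minimize_scalar

        def square_halving_min(f, grad, lo, hi, n_iters=60):
            """Minimize a smooth convex f over the box [lo, hi]^2 (illustrative sketch).

            Each iteration cuts the current rectangle through its center by a vertical
            or horizontal segment, solves the 1D minimization along that segment, and
            discards the half into which the perpendicular gradient component points.
            """
            xlo, xhi = lo, hi
            ylo, yhi = lo, hi
            for k in range(n_iters):
                if k % 2 == 0:                       # vertical cut: x fixed at midpoint
                    xm = 0.5 * (xlo + xhi)
                    res = minimize_scalar(lambda y: f(xm, y), bounds=(ylo, yhi), method="bounded")
                    gx = grad(xm, res.x)[0]          # horizontal gradient component
                    if gx > 0:
                        xhi = xm                     # minimum lies to the left of the cut
                    else:
                        xlo = xm
                else:                                # horizontal cut: y fixed at midpoint
                    ym = 0.5 * (ylo + yhi)
                    res = minimize_scalar(lambda x: f(x, ym), bounds=(xlo, xhi), method="bounded")
                    gy = grad(res.x, ym)[1]          # vertical gradient component
                    if gy > 0:
                        yhi = ym
                    else:
                        ylo = ym
            return 0.5 * (xlo + xhi), 0.5 * (ylo + yhi)

        # Example: f(x, y) = (x - 0.3)^2 + (y + 0.2)^2 on [-1, 1]^2
        f = lambda x, y: (x - 0.3) ** 2 + (y + 0.2) ** 2
        grad = lambda x, y: np.array([2 * (x - 0.3), 2 * (y + 0.2)])
        print(square_halving_min(f, grad, -1.0, 1.0))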

  2. We present an iterative algorithm that numerically solves both Urysohn-type Fredholm and Volterra nonlinear one-dimensional nonsingular integral equations of the second kind to a specified, modest user-defined accuracy. The algorithm is based on a descending recursive sequence of quadratures. Convergence of the numerical scheme is guaranteed by fixed-point theorems. Picard's method of successive approximations is of great importance for the existence theory of integral equations, but surprisingly little appears in the literature on numerical algorithms for its direct implementation. We show that the successive approximations method can be readily employed in the numerical solution of integral equations. To this end, the quadrature algorithm is thoroughly designed. It is based on the explicit form of a fifth-order embedded Runge–Kutta rule with adaptive step-size control. Since local error estimates may be cheaply obtained, continuous monitoring of the quadrature makes it possible to create very accurate automatic numerical schemes and to considerably reduce the main drawback of Picard iterations, namely the extremely large amount of computation with increasing recursion depth. Our algorithm is organized so that, as compared to most approaches, the nonlinearity of the integral equations does not induce any additional computational difficulties; it is very simple to apply and to implement in a program. Our algorithm exhibits some features of universality. First, it should be stressed that the method is as easy to apply to nonlinear as to linear equations of both Fredholm and Volterra kind. Second, the algorithm is equipped with stopping rules by which the calculations may to a considerable extent be controlled automatically. A compact C++ code of the described algorithm is presented. Our program realization is self-consistent: it demands no preliminary calculations, no external libraries, and no additional memory. Numerical examples are provided to show the applicability, efficiency, robustness and accuracy of our approach.
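    A minimal Python sketch of Picard's successive approximations for a Volterra equation of the second kind is given below. It is not the paper's C++ realization: the iterate is stored on a uniform grid with linear interpolation, and SciPy's adaptive quadrature replaces the authors' embedded Runge–Kutta rule.

        import numpy as np
        from scipy.integrate import quad

        def picard_volterra(f, K, a, b, n_nodes=101, tol=1e-8, max_iter=50):
            """Solve y(x) = f(x) + int_a^x K(x, t, y(t)) dt by successive approximations.

            The current iterate is stored on a uniform grid and evaluated between
            nodes by linear interpolation; the integral is computed with an adaptive
            quadrature (scipy.integrate.quad).
            """
            xs = np.linspace(a, b, n_nodes)
            y = f(xs)                                   # y_0(x) = f(x)
            for _ in range(max_iter):
                y_interp = lambda t: np.interp(t, xs, y)
                y_new = np.array([
                    f(x) + (quad(lambda t: K(x, t, y_interp(t)), a, x)[0] if x > a else 0.0)
                    for x in xs
                ])
                if np.max(np.abs(y_new - y)) < tol:     # simple stopping rule
                    return xs, y_new
                y = y_new
            return xs, y

        # Example: y(x) = 1 + int_0^x y(t) dt has the exact solution y(x) = exp(x)
        f = lambda x: np.ones_like(x, dtype=float)      # free term f(x) = 1
        K = lambda x, t, y_t: y_t                       # kernel K(x, t, y) = y
        xs, y = picard_volterra(f, K, 0.0, 1.0)
        print(np.max(np.abs(y - np.exp(xs))))           # error is small, limited by interpolation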

  3. Umnov A.E., Umnov E.A.
    Using feedback functions to solve parametric programming problems
    Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1125-1151

    We consider a finite-dimensional optimization problem whose formulation, in addition to the required variables, contains parameters. The solution to this problem is a dependence of the optimal values of the variables on the parameters. In general, these dependencies are not functions, since they can be multivalued and, even when single-valued, nondifferentiable. In addition, their domain of definition may be narrower than the domains of definition of the functions in the statement of the original problem. All these properties make it difficult to solve both the original parametric problem and other problems whose statements include these dependencies. To overcome these difficulties, methods of non-differentiable optimization are usually used.

    This article proposes an alternative approach that makes it possible to obtain solutions of parametric problems in a form free of the properties listed above. It is shown that such representations can be studied using standard algorithms based on the Taylor formula. This form is a function that smoothly approximates the solution of the original problem for any parameter values specified in its statement, and the approximation error is controlled by a special parameter. The proposed approximations are constructed using special functions that establish feedback (within the optimality conditions of the original problem) between the variables and the Lagrange multipliers. The method is described for linear problems and then generalized to the nonlinear case.

    From a computational point of view, constructing the approximation amounts to finding a saddle point of a modified Lagrange function of the original problem, where the modification is performed in a special way using feedback functions. It is shown that the necessary conditions for the existence of such a saddle point are similar to the conditions of the Karush – Kuhn – Tucker theorem, but contain neither inequality constraints nor complementary slackness conditions. The necessary conditions for the existence of a saddle point determine the approximation implicitly, so the implicit function theorem is used to compute its differential characteristics. The same theorem is used to reduce the approximation error to an acceptable level.
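    The general idea of replacing inequality constraints and complementary slackness by a smooth square system of equations can be illustrated on a toy parametric problem. The smoothing used below (the Chen – Harker – Kang – Smale function) is only a stand-in for the authors' feedback functions, and the problem, parameter and starting point are invented for the example.

        import numpy as np
        from scipy.optimize import fsolve

        def smoothed_kkt_solution(p, eps=1e-3):
            """Solve min_x (x - p)^2 s.t. x >= 0 via a smoothed optimality system.

            The complementarity conditions  lam >= 0, x >= 0, lam * x = 0  are replaced
            by the smooth equation  lam + x - sqrt((lam - x)^2 + 4 eps^2) = 0, giving a
            square system whose solution depends smoothly on the parameter p.
            """
            def system(z):
                x, lam = z
                stationarity = 2.0 * (x - p) - lam               # dL/dx = 0
                feedback = lam + x - np.sqrt((lam - x) ** 2 + 4.0 * eps ** 2)
                return [stationarity, feedback]
            x, lam = fsolve(system, x0=[1.0, 1.0])
            return x, lam

        # x*(p) is approximately max(p, 0), but smooth in p; eps controls the error.
        for p in (-1.0, 0.0, 1.0):
            print(p, smoothed_kkt_solution(p))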

    Features of the practical implementation of the feedback function method, including estimates of the rate of convergence to the exact solution, are demonstrated for several specific classes of parametric optimization problems. Specifically, problems of finding the global extremum of functions of many variables and multiple-extremum (maximin-minimax) problems are considered. Optimization problems that arise when using multicriteria mathematical models are also considered. Demonstration examples are given for each of these classes.

  4. Aristova E.N., Karavaeva N.I.
    Bicompact schemes for the HOLO algorithm for joint solution of the transport equation and the energy equation
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1429-1448

    The numerical solution of the system of high-temperature radiative gas dynamics (HTRGD) equations is a computationally laborious task, since the interaction of radiation with matter is nonlinear and non-local. The radiation absorption coefficients depend on temperature, and the temperature field is determined by both gas-dynamic processes and radiation transport. The method of splitting into physical processes is usually used to solve the HTRGD system; one of the blocks consists of the joint solution of the radiative transport equation and the energy balance equation of matter under known pressure and temperature fields. Difference schemes with orders of convergence no higher than second are usually used to solve this block. Due to computer memory limitations, not too detailed grids have to be used for complex technical problems, which increases the requirements on the approximation order of the difference schemes. In this work, bicompact schemes of a high order of approximation for the algorithm of the joint solution of the radiative transport equation and the energy balance equation are implemented for the first time. The proposed method can be applied to a wide range of practical problems, as it has high accuracy and is suitable for solving problems with coefficient discontinuities. The nonlinearity of the problem and the use of an implicit scheme lead to an iterative process that may converge slowly. In this paper we use a multiplicative HOLO algorithm, namely the quasi-diffusion method of V. Ya. Goldin. The key idea of HOLO algorithms is the joint solution of high-order (HO) and low-order (LO) equations. The high-order (HO) equation is the radiative transport equation solved in the energy multigroup approximation; the system of quasi-diffusion equations in the multigroup approximation (LO1) is obtained by averaging the HO equations over the angular variable. The next step is averaging over energy, resulting in an effective one-group system of quasi-diffusion equations (LO2), which is solved jointly with the energy equation. The solutions obtained at each stage of the HOLO algorithm are closely related, which ultimately accelerates the convergence of the iterative process. Difference schemes constructed by the method of lines within one cell are proposed for each stage of the HOLO algorithm. The schemes have the fourth order of approximation in space and the third order of approximation in time. The schemes for the transport equation were developed by B. V. Rogov and his colleagues; the schemes for the LO1 and LO2 equations were developed by the authors. An analytical test is constructed to demonstrate the declared orders of convergence. Various options for setting the boundary conditions are considered and their influence on the order of convergence in time and space is studied.
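    For orientation only, the quasi-diffusion (LO) closure can be written schematically in a one-group, slab-geometry form; this textbook illustration of Goldin's idea is not the multigroup system actually used in the paper:

    $$ \frac{d J}{d x} + \sigma_a \varphi = Q_0, \qquad \frac{d}{d x}\bigl(D(x)\,\varphi\bigr) + \sigma_t J = Q_1, \qquad D(x) = \frac{\int \mu^2 \psi(x,\mu)\,d\mu}{\int \psi(x,\mu)\,d\mu}, $$

    where $\psi$ is the HO transport solution, $\varphi$ and $J$ are its zeroth and first angular moments, $\sigma_a$ and $\sigma_t$ are the absorption and total cross sections, and the quasi-diffusion (Eddington) factor $D(x)$ computed at the HO stage closes the LO system, the closure becoming exact at convergence of the iterations.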

  5. Khudhur H.M., Halil I.H.
    Noise removal from images using the proposed three-term conjugate gradient algorithm
    Computer Research and Modeling, 2024, v. 16, no. 4, pp. 841-853

    Conjugate gradient algorithms represent an important class of unconstrained optimization algorithms with strong local and global convergence properties and modest memory requirements. These algorithms have advantages that place them between the steepest descent method and Newton's algorithm: they require calculating only the first derivatives and do not require calculating and storing the second derivatives that Newton's algorithm needs. They are also faster than the steepest descent algorithm, i.e. they overcome its slow convergence, and they do not need the Hessian matrix or any of its approximations, so they are widely used in optimization applications. This study proposes a novel method for image restoration by fusing the convex combination method with the hybrid conjugate gradient (CG) method to create a hybrid three-term CG algorithm. Combining the features of the Fletcher and Reeves (FR) conjugate parameter and its hybrid variant, we obtain the conjugate parameter of the search direction. The search direction is a combination of the negative gradient, the previous search direction, and the gradient from the previous iteration. We show that the new algorithm possesses the global convergence and descent properties when an inexact line search satisfying the standard Wolfe conditions is used, under some assumptions. To assess the effectiveness of the suggested algorithm, it was applied to image restoration problems. The numerical results show high efficiency and accuracy in image restoration and fast convergence compared to the Fletcher and Reeves (FR) and three-term Fletcher and Reeves (TTFR) algorithms.
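    A generic three-term conjugate gradient iteration of Fletcher – Reeves type with a Wolfe line search is sketched below in Python; the conjugacy parameters are a standard descent-guaranteeing choice and not the authors' hybrid convex combination, and the test function is invented for the example.

        import numpy as np
        from scipy.optimize import line_search

        def three_term_fr_cg(f, grad, x0, tol=1e-6, max_iter=500):
            """Generic three-term conjugate gradient method of Fletcher-Reeves type.

            Direction: d_{k+1} = -g_{k+1} + beta_k d_k - theta_k g_{k+1}, with
            beta_k = ||g_{k+1}||^2 / ||g_k||^2 (FR) and
            theta_k = beta_k * (g_{k+1}^T d_k) / ||g_{k+1}||^2,
            which guarantees g_{k+1}^T d_{k+1} = -||g_{k+1}||^2 (sufficient descent).
            Step sizes are chosen by a Wolfe-condition line search.
            """
            x = np.asarray(x0, dtype=float)
            g = grad(x)
            d = -g
            for _ in range(max_iter):
                if np.linalg.norm(g) < tol:
                    break
                alpha = line_search(f, grad, x, d)[0]
                if alpha is None:              # line search failed, fall back to a small step
                    alpha = 1e-4
                x_new = x + alpha * d
                g_new = grad(x_new)
                beta = g_new @ g_new / (g @ g)
                theta = beta * (g_new @ d) / (g_new @ g_new)
                d = -g_new + beta * d - theta * g_new
                x, g = x_new, g_new
            return x

        # Example on a smooth test function with minimum at (1, -2)
        f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
        grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
        print(three_term_fr_cg(f, grad, np.zeros(2)))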

  6. Aristova E.N., Astafurov G.O., Shilkov A.V.
    Calculation of radiation in shockwave layer of a space vehicle taking into account details of photon spectrum
    Computer Research and Modeling, 2017, v. 9, no. 4, pp. 579-594

    Calculations of radiation transport in the shock layer of a descent space vehicle cause essential difficulties due to the complex multi-resonance dependence of the macroscopic absorption cross sections on the photon energy. The convergence of two approximate spectrum-averaging methods to the results of exact pointwise spectrum calculations is investigated. The first is the well-known multigroup method; the second is the Lebesgue averaging method, which reduces the number of calculation points by aggregating spectral points characterized by equal absorption strength. It is shown that the Lebesgue averaging method converges significantly faster than the multigroup approach as the number of groups is increased. Only 100–150 Lebesgue groups are required to achieve the accuracy of pointwise calculations, even in the shock layer in the upper atmosphere with sharp absorption lines. At the same time the amount of computation is reduced by more than four orders of magnitude. A series of calculations of the radiation distribution function in a 2D shock layer around a sphere and a blunt cone was performed using the local flat layer approximation and the Lebesgue averaging method. It is shown that the shock wave radiation becomes more significant, both in the energy flux incident on the body surface and in the rate of energy exchange with the gas-dynamic flow, as the vehicle's size increases.
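    The aggregation of spectral points with equal absorption strength can be illustrated by a simple Python sketch; the logarithmic binning rule, the number of groups and the synthetic spectrum below are illustrative and are not the paper's procedure.

        import numpy as np

        def lebesgue_groups(energies, sigma, n_groups=128):
            """Group spectral points by absorption strength rather than by photon energy.

            Points whose absorption cross sections fall into the same logarithmic bin
            are aggregated into one Lebesgue group; each group carries its total
            statistical weight and a representative cross section.
            """
            log_sigma = np.log10(sigma)
            edges = np.linspace(log_sigma.min(), log_sigma.max(), n_groups + 1)
            group_index = np.clip(np.digitize(log_sigma, edges) - 1, 0, n_groups - 1)
            groups = []
            for g in range(n_groups):
                mask = group_index == g
                if not np.any(mask):
                    continue
                groups.append({
                    "weight": mask.sum() / sigma.size,     # fraction of spectral points
                    "sigma": sigma[mask].mean(),           # representative cross section
                    "energy_points": energies[mask],       # points may be far apart in energy
                })
            return groups

        # Example: a synthetic spectrum with sharp absorption lines
        E = np.linspace(1.0, 10.0, 100_000)
        sig = 0.1 + sum(5.0 / (1.0 + ((E - c) / 0.01) ** 2) for c in (2.0, 4.5, 7.3))
        print(len(lebesgue_groups(E, sig)), "groups instead of", E.size, "spectral points")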

  7. Gasnikov A.V., Kovalev D.A.
    A hypothesis about the rate of global convergence for optimal methods (Newton’s type) in smooth convex optimization
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 305-314

    In this paper we discuss lower bounds for the convergence of high-order convex optimization methods and the attainability of these bounds. We formulate a hypothesis that covers all the cases; note that we state it without proof. Newton's method is the most famous method that uses the gradient and Hessian of the optimized function. However, it converges only locally, even for strongly convex functions. Global convergence can be achieved with cubic regularization of Newton's method [Nesterov, Polyak, 2006], whose iteration cost is comparable with that of Newton's method and amounts to inverting the Hessian of the optimized function. Yu. Nesterov proposed an accelerated variant of Newton's method with cubic regularization in 2008 [Nesterov, 2008]. R. Monteiro and B. Svaiter managed to improve the global convergence of the cubic regularized method in 2013 [Monteiro, Svaiter, 2013]. Y. Arjevani, O. Shamir and R. Shiff showed in 2017 that the convergence bound of Monteiro and Svaiter is optimal (it cannot be improved by more than a logarithmic factor with any second-order method) [Arjevani et al., 2017]. They also managed to find bounds for convex optimization methods of $p$-th order for $p ≥ 2$. However, they obtained bounds only for first- and second-order methods in the strongly convex case. In 2018 Yu. Nesterov proposed third-order convex optimization methods with a rate of convergence close to these lower bounds and with an iteration cost similar to that of Newton's method [Nesterov, 2018]. Consequently, it was shown that high-order methods can be practical. In this paper we formulate lower bounds for $p$-th order methods with $p ≥ 3$ for strongly convex unconstrained optimization problems. The paper can also be viewed as a short survey of the state of the art in high-order optimization methods.
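    For reference, the convex-case lower bound discussed in the cited works is usually written in the following form (for $p = 2$ it reproduces the $N^{-7/2}$ rate attained by Monteiro and Svaiter); the strongly convex case addressed by the paper's hypothesis is not reproduced here:

    $$ f(x_N) - f_* = \Omega\!\left(\frac{L_p R^{p+1}}{N^{(3p+1)/2}}\right), $$

    where $L_p$ is the Lipschitz constant of the $p$-th derivative of $f$, $R$ is the distance from the starting point to the solution, and $N$ is the number of iterations.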

  8. Rukavishnikov V.A., Mosolapov A.O.
    Weighted vector finite element method and its applications
    Computer Research and Modeling, 2019, v. 11, no. 1, pp. 71-86

    Mathematical models of many natural processes are described by partial differential equations with singular solutions, and classical numerical methods for finding approximate solutions to such problems are inefficient. In the present paper a boundary value problem for the vector wave equation in an L-shaped domain is considered. The presence of a reentrant corner of size $3\pi/2$ on the boundary of the computational domain leads to a strong singularity of the solution, i.e. it does not belong to the Sobolev space $H^1$, so classical and special numerical methods have a convergence rate lower than $O(h)$. Therefore, in the present paper a special weighted set of vector functions is introduced, in which the solution of the considered boundary value problem is defined as an $R_\nu$-generalized one.

    For the numerical determination of the $R_\nu$-generalized solution, a weighted vector finite element method is constructed. The basic difference of this method is that the basis functions contain as a factor a special weight function raised to a degree that depends on the properties of the solution of the initial problem. This allows one to significantly raise the convergence speed of the approximate solution to the exact one when the mesh is refined. Moreover, the introduced basis functions are solenoidal, so the solenoidal condition for the solution is satisfied exactly and spurious numerical solutions are prevented.
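    Schematically, and only as an illustration of the construction (the exact weighted spaces and the choice of the exponent are defined in the paper), the weighted basis functions have the form

    $$ \psi_i(x) = \rho(x)^{\nu^*} \varphi_i(x), \qquad \rho(x) = \mathrm{dist}(x, x_0), $$

    where $\varphi_i$ are standard solenoidal vector finite element basis functions, $x_0$ is the reentrant corner, and the exponent $\nu^*$ is chosen according to the strength of the singularity; the approximate $R_\nu$-generalized solution is then sought as an expansion in the $\psi_i$.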

    Results of numerical experiments are presented for a series of model problems of different types: some have a solution containing only the singular component, and some have a solution containing both singular and regular components. The numerical experiments showed that when the finite element mesh is refined, the convergence rate of the constructed weighted vector finite element method is $O(h)$, which is more than one and a half times better than that of the special methods developed for the described problem, namely the singular complement method and the regularization method. Other features of the constructed method are its algorithmic simplicity and the naturalness of the solution determination, which is beneficial for numerical computations.

  9. Stonyakin F.S., Stepanov A.N., Gasnikov A.V., Titov A.A.
    Mirror descent for constrained optimization problems with large subgradient values of functional constraints
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 301-317

    The paper is devoted to the problem of minimizing a non-smooth functional $f$ under a non-positive non-smooth Lipschitz-continuous functional constraint $g$; the problem statement is considered separately for quasi-convex and strongly (quasi-)convex functionals. We propose new step-size strategies and adaptive stopping rules in Mirror Descent for the considered class of problems. It is shown that the methods are applicable to objective functionals of various levels of smoothness. By applying a special restart technique to the considered version of Mirror Descent, an optimal method is obtained for optimization problems with strongly convex objective functionals. Estimates of the rate of convergence of the considered methods are obtained depending on the level of smoothness of the objective functional. These estimates indicate the optimality of the considered methods from the point of view of the theory of lower oracle bounds. In particular, the optimality of our approach for Hölder-continuous quasi-convex (sub)differentiable objective functionals is proved. In addition, the case of a quasi-convex objective functional and functional constraint is considered. The paper presents numerical experiments demonstrating the advantages of the considered methods.
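    A Euclidean-prox Python sketch of a productive/non-productive switching subgradient scheme with adaptive steps is given below; the step-size rule, the stopping by iteration budget and the test problem are illustrative and are not the exact rules analyzed in the paper.

        import numpy as np

        def mirror_descent_constrained(subgrad_f, subgrad_g, g, x0, eps, max_iter=10_000):
            """Subgradient (Euclidean mirror) descent for min f(x) s.t. g(x) <= 0.

            Switching scheme: if the constraint is nearly satisfied (g(x) <= eps) the
            step is "productive" and uses a subgradient of f; otherwise it is
            "non-productive" and uses a subgradient of g. Steps are adaptive,
            h = eps / ||subgradient||^2. Returns the average of productive iterates.
            """
            x = np.asarray(x0, dtype=float)
            productive = []
            for _ in range(max_iter):
                if g(x) <= eps:
                    v = subgrad_f(x)
                    productive.append(x.copy())
                else:
                    v = subgrad_g(x)
                nv2 = v @ v
                if nv2 == 0.0:
                    break
                x = x - (eps / nv2) * v
            return np.mean(productive, axis=0) if productive else x

        # Example: minimize f(x) = |x1| + |x2| subject to g(x) = 1 - x1 <= 0
        f_sub = lambda x: np.sign(x)
        g_fun = lambda x: 1.0 - x[0]
        g_sub = lambda x: np.array([-1.0, 0.0])
        print(mirror_descent_constrained(f_sub, g_sub, g_fun, np.array([3.0, 3.0]), eps=1e-2))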

  10. Rukavishnikov V.A., Rukavishnikov A.V.
    The method of numerical solution of the one stationary hydrodynamics problem in convective form in $L$-shaped domain
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1291-1306

    An essential class of problems describes physical processes occurring in non-convex domains containing a corner greater than 180 degrees on the boundary. The solution in a neighborhood of such a corner is singular, and finding it by classical approaches entails a loss of accuracy. In the paper we consider the stationary Navier – Stokes equations in convection form, linearized by Picard's iterations, governing the flow of an incompressible viscous fluid in an $L$-shaped domain. An $R_\nu$-generalized solution of the problem in special sets of weighted spaces is defined. A special finite element method for finding an approximate $R_\nu$-generalized solution is constructed. Firstly, the functions of the finite element spaces satisfy the law of conservation of mass in the strong sense, i.e. at the grid nodes; for this purpose the Scott – Vogelius element pair is used. Fulfilling the mass conservation condition leads to a solution that is more accurate from a physical point of view. Secondly, the basis functions of the finite element spaces are supplemented by weight functions. The degree of the weight function, the parameter $\nu$ in the definition of the $R_\nu$-generalized solution, and the radius of a neighborhood of the singularity point are free parameters of the method. A specially selected combination of them almost doubles the order of the convergence rate of the approximate solution to the exact one in comparison with the classical approaches. The convergence rate reaches first order in the grid step in the norms of weighted Sobolev spaces. Thus, it is numerically shown that the convergence rate does not depend on the corner value.
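    For reference, the Picard linearization mentioned above replaces the nonlinear convective term by its value at the previous iterate, so at iteration $k+1$ one solves a linear Oseen-type problem of the schematic form

    $$ (\mathbf{u}^{k} \cdot \nabla)\,\mathbf{u}^{k+1} - \mu \Delta \mathbf{u}^{k+1} + \nabla p^{k+1} = \mathbf{f}, \qquad \nabla \cdot \mathbf{u}^{k+1} = 0, $$

    where $\mathbf{u}^{k}$ is known, $\mu$ denotes the kinematic viscosity (written so as not to clash with the weight exponent $\nu$), and the iterations are repeated until $\mathbf{u}^{k+1}$ and $\mathbf{u}^{k}$ are sufficiently close.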
