Error accumulation in the conjugate gradient method for a degenerate problem
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 459-472

In this paper, we consider the conjugate gradient method for minimizing a quadratic function with additive noise in the gradient. Three concepts of noise are considered: antagonistic noise in the linear term, stochastic noise in the linear term, and noise in the quadratic term, as well as combinations of the first two with the last. It was experimentally established that error accumulation is absent for all of the considered concepts, contrary to the folklore opinion that, as in accelerated methods, error accumulation must take place. The paper gives motivation for why the error may not accumulate. The dependence of the solution error both on the magnitude (scale) of the noise and on the size of the solution obtained by the conjugate gradient method was also investigated experimentally. Hypotheses about the dependence of the solution error on the noise scale and on the size (2-norm) of the solution are proposed and tested for all the concepts considered. It turns out that the error in the solution (in function value) depends linearly on the noise scale. The paper contains graphs illustrating each individual experiment, as well as a detailed description of the numerical experiments, including an account of how noise was added to both the vector and the matrix.
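The setting can be illustrated with a short sketch (our own, not the authors' code): conjugate gradients on a rank-deficient (degenerate) quadratic, where every gradient evaluation carries fresh stochastic noise in the linear term. The noise scale `delta` and the problem sizes are illustrative assumptions.

```python
import numpy as np

def noisy_cg(A, b, delta, n_iters, rng):
    """Conjugate gradients on f(x) = 0.5*x^T A x - b^T x, where every
    residual (negative gradient) is perturbed by noise of scale delta."""
    x = np.zeros_like(b)
    r = b - A @ x + delta * rng.standard_normal(b.shape)  # noisy residual
    p = r.copy()
    for _ in range(n_iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = b - A @ x + delta * rng.standard_normal(b.shape)
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

rng = np.random.default_rng(0)
n, rank = 50, 40
Q = rng.standard_normal((n, rank))
A = Q @ Q.T                          # PSD and rank-deficient: degenerate problem
b = A @ rng.standard_normal(n)       # consistent right-hand side
f = lambda v: 0.5 * v @ A @ v - b @ v
x_star = np.linalg.pinv(A) @ b       # a minimizer of the degenerate problem
for delta in (1e-4, 1e-3, 1e-2):
    x = noisy_cg(A, b, delta, n_iters=100, rng=rng)
    print(delta, f(x) - f(x_star))   # error in f: roughly linear in delta
```

For small `delta`, the final objective error should track the noise scale rather than grow with the iteration count, which is the qualitative behavior reported in the abstract.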
On numerical solution of joint inverse geophysical problems with structural constraints
Computer Research and Modeling, 2020, v. 12, no. 2, pp. 329-343

Inverse geophysical problems are difficult to solve due to their mathematically ill-posed formulation and large computational cost. Geophysical exploration in frontier areas is further complicated by the lack of reliable geological information. In this case, inversion methods that allow several types of geophysical data to be interpreted jointly are recognized to be of major importance. This paper is dedicated to one such inversion method, which is based on minimization of the determinant of the Gram matrix for a set of model vectors. Within this approach, we minimize a nonlinear functional consisting of the squared norms of the data residuals of different types, a sum of stabilizing functionals, and a term that measures the structural similarity between different model vectors. We apply this approach to a synthetic seismic and electromagnetic data set. Specifically, we study joint inversion of the acoustic pressure response together with the controlled-source electrical field, imposing structural constraints on the resulting electrical conductivity and P-wave velocity distributions.
We begin with the problem formulation and present the numerical method for the inverse problem, implementing a conjugate-gradient algorithm for nonlinear optimization. The efficiency of our approach is demonstrated in numerical experiments in which the true 3D electrical conductivity model was assumed to be known, while the velocity model was constructed during inversion of seismic data. The true velocity model was based on a simplified geological structure of a marine prospect. Synthetic seismic data were used as input for our minimization algorithm. The resulting velocity model not only fits the data but also shares structural similarity with the given conductivity model. Our tests have shown that an optimally chosen weight of the Gramian term may considerably improve the resolution of the final models.
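For concreteness, here is a minimal sketch (our reading of the abstract, not the paper's implementation) of the Gramian coupling term: the determinant of the Gram matrix of a set of model vectors, which vanishes exactly when the models are linearly dependent, i.e., maximally structurally similar.

```python
import numpy as np

def gramian_term(models):
    """det of the Gram matrix G_ij = <m_i, m_j> for a list of model vectors."""
    M = np.stack([m.ravel() for m in models])  # one flattened model per row
    return np.linalg.det(M @ M.T)

rng = np.random.default_rng(1)
velocity = rng.standard_normal(1000)            # e.g. P-wave velocity model
conductivity_aligned = 2.5 * velocity           # structurally identical model
conductivity_random = rng.standard_normal(1000)
print(gramian_term([velocity, conductivity_aligned]))  # ~0: term is minimized
print(gramian_term([velocity, conductivity_random]))   # > 0: penalized
```

Adding this term with a tunable weight to the data-misfit and stabilizing functionals yields a joint objective of the kind described above; the published method may apply the term to transformed models (e.g., their spatial gradients), which this sketch omits for simplicity.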
A gradient method with inexact oracle for composite nonconvex optimization
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 321-334

In this paper, we develop a new first-order method for composite nonconvex minimization problems with simple constraints and an inexact oracle. The objective function is given as a sum of a "hard", possibly nonconvex part and a "simple" convex part. Informally speaking, oracle inexactness means that, for the "hard" part, at any point we can approximately calculate the value of the function and construct a quadratic function that approximately bounds it from above. We give several examples of such inexactness: smooth nonconvex functions with an inexact Hölder-continuous gradient, and functions given by an auxiliary uniformly concave maximization problem that can be solved only approximately. For the introduced class of problems, we propose a gradient-type method that allows one to use different proximal setups to adapt to the geometry of the feasible set, adaptively chooses the controlled oracle error, and allows for an inexact proximal mapping. We provide a convergence rate for our method in terms of the norm of the generalized gradient mapping and show that, in the case of an inexact Hölder-continuous gradient, our method is universal with respect to the Hölder parameters of the problem. Finally, in a particular case, we show that a small value of the norm of the generalized gradient mapping at a point means that a necessary condition of a local minimum approximately holds at that point.
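A single step of such a method might look as follows (a minimal sketch under our assumptions about the oracle interface, not the authors' algorithm): the inexact oracle returns a value and gradient for the "hard" part, the step is accepted when the inexact upper quadratic bound holds at the trial point, and the estimate of the constant L is doubled otherwise.

```python
import numpy as np

def inexact_gradient_step(x, oracle, prox_h, L, delta, L_max=1e8):
    """One gradient-type step: backtrack on L until the inexact upper
    quadratic bound f(y) <= f(x) + <g, y-x> + L/2 ||y-x||^2 + delta holds."""
    f_x, g_x = oracle(x)                    # inexact value and gradient at x
    while L <= L_max:
        y = prox_h(x - g_x / L, 1.0 / L)    # proximal (projected) gradient step
        f_y, _ = oracle(y)
        if f_y <= f_x + g_x @ (y - x) + 0.5 * L * np.sum((y - x) ** 2) + delta:
            return y, L                      # bound holds: accept the step
        L *= 2.0                             # otherwise increase the estimate of L
    raise RuntimeError("backtracking on L failed")

# Toy usage: f(x) = 0.5*||x - c||^2 over the box [0,1]^3, exact oracle (delta=0),
# with the "simple" part handled by projection onto the box.
c = np.array([1.5, -0.5, 0.3])
oracle = lambda x: (0.5 * np.sum((x - c) ** 2), x - c)
prox_h = lambda z, t: np.clip(z, 0.0, 1.0)
x, L = np.zeros(3), 1.0
for _ in range(20):
    x, L = inexact_gradient_step(x, oracle, prox_h, L, delta=0.0)
print(x)  # -> approximately [1.0, 0.0, 0.3], the projection of c onto the box
```

The tolerance `delta` in the acceptance test is where the controlled oracle error of the abstract would enter; here it is set to zero because the toy oracle is exact.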
Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 357-376

In this paper we propose high-order (tensor) methods for two types of saddle point problems. First, we consider the classic min-max saddle point problem. Second, we consider the search for a stationary point of the saddle point objective via minimization of the norm of its gradient. Obviously, a stationary point does not always coincide with the optimal point. However, for a linear optimization problem with linear constraints, gradient norm minimization becomes useful: in this case we can reconstruct the solution of the primal optimization problem from the solution of the gradient norm minimization problem for the dual function. In this paper we consider both types of problems without constraints. Additionally, we assume that the objective function is $\mu$-strongly convex in the first argument, $\mu$-strongly concave in the second argument, and that the $p$-th derivative of the objective is Lipschitz-continuous.
For min-max problems we propose two algorithms. Since we consider a strongly convex-strongly concave problem, the first algorithm takes the existing tensor method for convex-concave saddle point problems and accelerates it with the restart technique; the resulting convergence rate is linear. If we additionally assume that the objective has Lipschitz-continuous first and second derivatives, we can improve the performance even further by switching to another existing algorithm once we reach its region of quadratic convergence. This gives the second algorithm, which has a global linear convergence rate and a local quadratic convergence rate.
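The restart technique can be sketched on a toy problem (our own illustration; plain extragradient steps stand in for the base tensor method, and all constants are assumptions). Each stage warm-starts the base method from the previous output; strong convexity-concavity makes the distance to the saddle point shrink by a constant factor per stage, giving an overall linear rate.

```python
import numpy as np

# Toy strongly convex-strongly concave saddle point problem:
#   min_x max_y  (mu/2)*||x||^2 + x^T B y - (mu/2)*||y||^2,   saddle at (0, 0).
rng = np.random.default_rng(3)
mu, B = 1.0, rng.standard_normal((5, 5))

def base_method(x, y, budget, step=0.05):
    """Stand-in for the base tensor method: plain extragradient steps."""
    for _ in range(budget):
        gx, gy = mu * x + B @ y, B.T @ x - mu * y     # partial gradients
        xh, yh = x - step * gx, y + step * gy         # extrapolation point
        gx, gy = mu * xh + B @ yh, B.T @ xh - mu * yh
        x, y = x - step * gx, y + step * gy           # corrected step
    return x, y

# Restart loop: warm-start each stage from the previous approximate solution.
x, y = np.ones(5), np.ones(5)
for stage in range(8):
    x, y = base_method(x, y, budget=50)
    print(stage, np.linalg.norm(x) + np.linalg.norm(y))  # shrinks geometrically
```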
Finally, in convex optimization there exists a special methodology for solving gradient norm minimization problems by tensor methods. Its main idea is to use existing (near-)optimal algorithms inside a special framework. We emphasize that inside this framework we do not necessarily need the assumption of strong convexity, because we can regularize a convex objective in a special way to make it strongly convex. In this article we transfer this framework to convex-concave objective functions and combine it with our aforementioned algorithm with a global linear and a local quadratic convergence rate.
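The regularization mentioned above can be written out explicitly (a standard construction sketched under our assumptions; the paper's exact choice of regularizer may differ). Given a convex-concave $f$ and an anchor point $(x_0, y_0)$, one solves

$$\min_x \max_y \left[ f(x, y) + \frac{\mu}{2}\|x - x_0\|^2 - \frac{\mu}{2}\|y - y_0\|^2 \right],$$

which is $\mu$-strongly convex-strongly concave; choosing $\mu$ proportional to the target accuracy keeps the solution of the regularized problem close to that of the original one.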
Since the saddle point problem is a particular case of the monotone variational inequality problem, the proposed methods also apply to strongly monotone variational inequality problems.