Search results for 'smoothing':
Articles found: 58
  1. Postnikov E.B.
    Wavelet transform with the Morlet wavelet: Calculation methods based on a solution of diffusion equations
    Computer Research and Modeling, 2009, v. 1, no. 1, pp. 5-12

    Two algorithms for evaluating the continuous wavelet transform with the Morlet wavelet are presented. The first solves a PDE in which the transformed signal plays the role of the initial value. The second allows one to explore the influence of the central frequency variation via diffusion smoothing of the data modulated by harmonic functions. These approaches are illustrated by the analysis of chaotic oscillations of coupled Roessler systems.

    Views (last year): 5. Citations: 3 (RSCI).
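
    A minimal numerical sketch of the second approach in entry 1 (Gaussian, i.e. diffusion-type, smoothing of the harmonically modulated signal), assuming the standard Morlet wavelet with central frequency $\omega_0$ and a uniformly sampled signal; the function name and the normalization below are illustrative and do not reproduce the authors' PDE-based algorithm.

    # Sketch: Morlet CWT via Gaussian (diffusion-type) smoothing of the modulated signal.
    # Assumes uniform sampling with step dt and the standard Morlet wavelet with
    # central frequency omega0; illustrative only, not the authors' PDE solver.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def morlet_cwt(signal, scales, dt=1.0, omega0=6.0):
        t = np.arange(len(signal)) * dt
        rows = []
        for a in scales:
            # modulate the signal by the carrier exp(-i*omega0*t/a) ...
            m = signal * np.exp(-1j * omega0 * t / a)
            # ... and smooth it with a Gaussian of width a (heat-equation smoothing)
            sm = (gaussian_filter1d(m.real, a / dt)
                  + 1j * gaussian_filter1d(m.imag, a / dt))
            # restore the carrier phase and apply an approximate Morlet normalization
            rows.append(np.pi**(-0.25) * np.sqrt(2.0 * np.pi * a)
                        * np.exp(1j * omega0 * t / a) * sm)
        return np.array(rows)
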
  2. Fedosova A.N., Silaev D.A.
    Mathematical modeling of bending of a circular plate using $S$-splines
    Computer Research and Modeling, 2015, v. 7, no. 5, pp. 977-988

    This article is dedicated to the use of higher-degree $S$-splines for solving equations of elasticity theory. As an example we consider the solution of the equation of bending of a plate on a circle. An $S$-spline is a piecewise-polynomial function whose coefficients are determined by two kinds of conditions: the first part of the coefficients is determined by the smoothness of the spline, and the rest are found by the least-squares method. We consider $S$-splines of degree 7 and smoothness class $C^4$.

    Views (last year): 4.
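
    For reference, the plate-bending problem mentioned in entry 2 is, in the standard Kirchhoff model (assumed here; the paper may use a different normalization), the biharmonic boundary value problem

    \[ D\,\Delta^2 w(x,y) = q(x,y), \qquad D = \frac{Eh^3}{12(1-\nu^2)}, \]

    where $w$ is the deflection, $q$ is the transverse load, and boundary conditions (e.g. clamped: $w = \partial w/\partial n = 0$) are imposed on the circle; the $S$-spline representation of $w$ is then substituted into this equation or into the corresponding variational formulation.
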
  3. Sviridenko A.B.
    Designing a zero on a linear manifold, a polyhedron, and a vertex of a polyhedron. Newton methods of minimization
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 563-591

    We consider approaches to the construction of methods for solving quadratic programming problems that arise when computing descent directions in the repeated minimization of smooth functions on a set given by linear equalities. The approach consists of two stages.

    At the first stage, the quadratic programming problem is transformed by a numerically stable direct multiplicative algorithm into the equivalent problem of projecting the origin of coordinates onto a linear manifold, which gives a new mathematical formulation of the dual quadratic problem. For this purpose, a numerically stable direct multiplicative method for solving systems of linear equations is proposed that takes into account the sparsity of matrices stored in packed form. The advantage of this approach is that the modified Cholesky factors used to construct a substantially positive definite matrix of the system of equations, and the solution of that system, are computed within a single procedure. In addition, the fill-in of the principal rows of the multipliers can be minimized without loss of accuracy, and the position of the next processed row of the matrix is not changed, which allows static data storage formats to be used.

    At the second stage, the necessary and sufficient optimality conditions in the Kuhn–Tucker form determine the computation of the descent direction: the solution of the dual quadratic problem is reduced to solving a system of linear equations with a symmetric positive definite matrix for the Lagrange multipliers and to substituting the solution into the formula for the descent direction.

    It is proved that, at one iteration, the proposed approach to computing the descent direction by numerically stable direct multiplicative methods requires less computation, by a cubic law, than one iteration of the well-known dual method of Gill and Murray. In addition, the proposed method allows the computational process to be organized from any starting point that the user chooses as the initial approximation of the solution.

    Variants of the problem of projecting the origin of coordinates onto a linear manifold, a convex polyhedron, and a vertex of a convex polyhedron are presented, and the relationship and implementation of the methods for solving these problems are described.

    Views (last year): 6.
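
    A minimal dense sketch of the second-stage computation in entry 3, namely projecting the origin of coordinates onto the linear manifold {x : Ax = b} by solving a symmetric positive definite system for the Lagrange multipliers via a Cholesky factorization; the function name is illustrative, and the paper's sparse, packed-storage multiplicative algorithm is not reproduced.

    # Sketch: project the origin onto the linear manifold {x : A x = b}.
    # KKT conditions give (A A^T) lam = b and x* = A^T lam, so the multipliers
    # are found from a symmetric positive definite system (here via Cholesky).
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def project_origin(A, b):
        gram = A @ A.T                        # SPD if A has full row rank
        lam = cho_solve(cho_factor(gram), b)  # Lagrange multipliers
        return A.T @ lam                      # closest point of {x : Ax = b} to 0

    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0]])
    b = np.array([1.0, 2.0])
    x = project_origin(A, b)
    print(x, A @ x)                           # A @ x should reproduce b
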
  4. Currently, various nonlinear numerical schemes of spatial approximation are used in the numerical simulation of boundary value problems for hyperbolic systems of partial differential equations (e.g. gas dynamics equations, MHD, deformable rigid body, etc.). This is due to the need to improve the order of accuracy and to simulate the discontinuous solutions that frequently occur in such systems. The need for nonlinear schemes follows from the barrier theorem of S. K. Godunov, which states the impossibility of constructing a linear monotone scheme for such equations with approximation order two or higher. Among the most accurate nonlinear schemes are ENO (essentially non-oscillating) schemes and their modifications, including WENO (weighted essentially non-oscillating) schemes. The latter have become the most widespread, since for the same stencil width they have a higher order of approximation than ENO schemes. The benefit of ENO and WENO schemes is their ability to maintain a high order of approximation in regions of non-monotonic solutions. The main difficulty in analyzing such schemes comes from the fact that they are themselves nonlinear and are used to approximate nonlinear equations. In particular, a linear stability condition was obtained earlier only for the WENO5 scheme (fifth-order approximation on smooth solutions), and only numerically.

    In this paper we consider the construction and stability of WENO5, WENO7, WENO9, WENO11, and WENO13 finite volume schemes for the Hopf equation. In the first part of the article we discuss WENO methods in general and give explicit expressions for the coefficients of the polynomial weights and the linear combinations required to build these schemes. We prove a series of assertions that allow conclusions to be drawn about the order of approximation depending on the type of local solution. Stability analysis is carried out on the basis of the principle of frozen coefficients. The cases of smooth and discontinuous behavior of solutions in the region of linearization with frozen coefficients on the faces of the finite volume are analyzed, and the spectra of the schemes are studied for these cases. We prove linear stability conditions for a variety of Runge–Kutta methods applied to WENO schemes. As a result, our study provides guidance on choosing the stability parameter that has the smallest effect on the nonlinear properties of the schemes. The convergence of the schemes follows from the analysis.

    Views (last year): 9. Citations: 1 (RSCI).
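
    A minimal sketch of the classical fifth-order WENO reconstruction (Jiang–Shu weights) of the left-biased interface value, of the kind discussed in entry 4; the constants are the standard WENO5 ones, the function name is illustrative, and the higher-order variants (WENO7 through WENO13) and the stability analysis of the paper are not reproduced.

    # Sketch: classical WENO5 (Jiang-Shu) reconstruction of u at the interface
    # x_{i+1/2} from five cell averages u_{i-2},...,u_{i+2} (left-biased stencil).
    def weno5_left(u_m2, u_m1, u_0, u_p1, u_p2, eps=1e-6):
        # candidate third-order reconstructions on the three sub-stencils
        p0 = (2*u_m2 - 7*u_m1 + 11*u_0) / 6.0
        p1 = ( -u_m1 + 5*u_0  +  2*u_p1) / 6.0
        p2 = (2*u_0  + 5*u_p1 -    u_p2) / 6.0
        # smoothness indicators
        b0 = 13/12*(u_m2 - 2*u_m1 + u_0)**2 + 0.25*(u_m2 - 4*u_m1 + 3*u_0)**2
        b1 = 13/12*(u_m1 - 2*u_0 + u_p1)**2 + 0.25*(u_m1 - u_p1)**2
        b2 = 13/12*(u_0 - 2*u_p1 + u_p2)**2 + 0.25*(3*u_0 - 4*u_p1 + u_p2)**2
        # nonlinear weights built from the ideal (linear) weights 1/10, 6/10, 3/10
        a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
        s = a0 + a1 + a2
        return (a0*p0 + a1*p1 + a2*p2) / s
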
  5. Gasnikov A.V., Gorbunov E.A., Kovalev D.A., Mohammed A.A., Chernousova E.O.
    The global rate of convergence for optimal tensor methods in smooth convex optimization
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 737-753

    In this work we consider the Monteiro–Svaiter accelerated hybrid proximal extragradient (A-HPE) framework and the accelerated Newton proximal extragradient (A-NPE) framework. The latter contains an optimal method for rather smooth convex optimization problems with a second-order oracle. We generalize the A-NPE framework to higher-order derivative oracles (schemes). We replace the Newton-type step in A-NPE that was used for the auxiliary problem by a regularized Newton (tensor) type step (Yu. Nesterov, 2018). Moreover, we generalize the large-step A-HPE/A-NPE framework by replacing the Monteiro–Svaiter large-step condition so that this framework can work with high-order schemes. The main contribution of the paper is as follows: we propose optimal high-order methods for convex optimization problems. To the best of our knowledge, at the moment there exist only zeroth-, first- and second-order optimal methods that work according to the lower bounds. For higher-order schemes there is a gap between the lower bounds (Arjevani, Shamir, Shiff, 2017) and the existing high-order (tensor) methods (Nesterov–Polyak, 2006; Yu. Nesterov, 2008; M. Baes, 2009; Yu. Nesterov, 2018). Asymptotically, the ratio of the convergence rates of the best existing methods to the lower bounds is about 1.5. In this work we eliminate this gap and show that the lower bounds are tight. We also consider rather smooth strongly convex optimization problems and show how to generalize the proposed methods to this case. The basic idea is to use the restart technique until the iteration sequence reaches the region of quadratic convergence of Newton's method, and then to use Newton's method. One can show that the considered method converges with the optimal rate up to a logarithmic factor. Note that the technique proposed in this work can be generalized to the case when we cannot solve the auxiliary problem exactly, and even when we cannot calculate the derivatives of the functional exactly. Moreover, the proposed technique can be generalized to composite optimization problems and, in particular, to constrained convex optimization problems. We also formulate a list of open questions that arise around the main result of this paper (an optimal universal method of high order, etc.).

    Views (last year): 75.
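
    For reference, the regularized Newton (tensor) step of Yu. Nesterov (2018) that replaces the Newton-type step in the A-NPE framework can be written, in one common normalization (assumed here), as

    \[ x_{k+1} = \arg\min_{y}\Big\{ \sum_{i=0}^{p} \frac{1}{i!}\, D^i f(x_k)[y-x_k]^i + \frac{M}{(p+1)!}\,\|y-x_k\|^{p+1} \Big\}, \]

    where $D^i f(x_k)[\cdot]^i$ denotes the $i$-th derivative applied to $i$ copies of the displacement and $M$ upper-bounds the Lipschitz constant of the $p$-th derivative; for $p=2$ this reduces to the cubically regularized Newton step of Nesterov and Polyak (2006).
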
  6. Alkousa M.S.
    On some stochastic mirror descent methods for constrained online optimization problems
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 205-217

    The problem of online convex optimization naturally occurs in cases when statistical information is updated. The mirror descent method is well known for non-smooth optimization problems. Mirror descent is an extension of the subgradient method for solving non-smooth convex optimization problems in the case of a non-Euclidean distance. This paper is devoted to a stochastic variant of recently proposed mirror descent methods for convex online optimization problems with convex Lipschitz (generally, non-smooth) functional constraints. This means that we can still use the value of the functional constraint, but instead of the (sub)gradients of the objective functional and the functional constraint we use their stochastic (sub)gradients. More precisely, assume that $N$ convex Lipschitz non-smooth functionals are given on a closed subset of an $n$-dimensional vector space. The problem is to minimize the arithmetic mean of these functionals under a convex Lipschitz constraint. Two methods using stochastic (sub)gradients are proposed for solving this problem: an adaptive method (which does not require knowledge of the Lipschitz constant either for the objective functional or for the constraint functional) and a non-adaptive method (which requires knowledge of the Lipschitz constants for the objective functional and the constraint functional). Note that each stochastic (sub)gradient is allowed to be computed only once. In the case of non-negative regret, we find that the number of non-productive steps is $O(N)$, which indicates the optimality of the proposed methods. We consider an arbitrary proximal structure, which is essential for decision-making problems. Results of numerical experiments are presented that allow the adaptive and non-adaptive methods to be compared on some examples. It is shown that the adaptive method can significantly improve the number of found solutions.

    Views (last year): 42.
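
    A minimal sketch of the productive/non-productive step logic behind methods of the kind in entry 6, written for the entropic (simplex) proximal setup with exact subgradients; the function names, the fixed step size and the averaging of productive points are illustrative simplifications, and the stochastic oracles and the adaptive step-size rule of the paper are not reproduced.

    # Sketch: mirror descent on the probability simplex with a functional
    # constraint g(x) <= 0.  If the constraint is (almost) satisfied, take a
    # "productive" step along a subgradient of the objective f; otherwise take a
    # "non-productive" step along a subgradient of g.  Entropic proximal setup:
    # the mirror step is a multiplicative (exponentiated-gradient) update.
    import numpy as np

    def mirror_step(x, grad, h):
        y = x * np.exp(-h * grad)
        return y / y.sum()

    def constrained_md(f_subgrad, g, g_subgrad, n, steps=1000, h=0.05, eps=1e-3):
        x = np.full(n, 1.0 / n)          # start at the barycenter of the simplex
        productive = []
        for _ in range(steps):
            if g(x) <= eps:              # productive step
                x = mirror_step(x, f_subgrad(x), h)
                productive.append(x.copy())
            else:                        # non-productive step
                x = mirror_step(x, g_subgrad(x), h)
        # the average of the productive points is returned as the approximate solution
        return np.mean(productive, axis=0) if productive else x
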
  7. Rovenska O.G.
    Approximation of analytic functions by repeated de la Vallee Poussin sums
    Computer Research and Modeling, 2019, v. 11, no. 3, pp. 367-377

    The paper deals with problems of the approximation of periodic functions of high smoothness by arithmetic means of Fourier sums. The simplest and most natural example of a linear process of approximation of continuous periodic functions of a real variable is the approximation of these functions by partial sums of the Fourier series. However, the sequences of partial Fourier sums are not uniformly convergent over the entire class of continuous $2\pi$-periodic functions. In connection with this, a significant number of papers are devoted to the study of the approximative properties of other approximation methods, which are generated by certain transformations of the partial sums of the Fourier series and allow one to construct sequences of trigonometric polynomials that converge uniformly for each function $f \in C$. In particular, over the past decades, de la Vallee Poussin sums and Fejer sums have been widely studied. One of the most important directions in this field is the study of the asymptotic behavior of the upper bounds of deviations of arithmetic means of Fourier sums on various classes of periodic functions. Methods for investigating integral representations of deviations of polynomials on classes of periodic differentiable functions of a real variable originated and were developed in the works of S.M. Nikol'sky, S.B. Stechkin, N.P. Korneichuk, V.K. Dzadyk, and others.

    The work systematizes known results related to the approximation of classes of periodic functions of high smoothness by arithmetic means of Fourier sums and presents new facts obtained for particular cases. The paper studies the approximative properties of $r$-repeated de la Vallee Poussin sums on classes of periodic functions that admit a regular extension to a fixed strip of the complex plane. We obtain asymptotic formulas for the upper bounds of the deviations of repeated de la Vallee Poussin sums taken over classes of periodic analytic functions. In certain cases, these formulas give a solution of the corresponding Kolmogorov–Nikolsky problem. We indicate conditions under which the repeated de la Vallee Poussin sums guarantee a better order of approximation than the ordinary de la Vallee Poussin sums.

    Views (last year): 45.
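
    A minimal sketch of repeated de la Vallee Poussin sums for a sampled $2\pi$-periodic function, under the common convention that $V_{n,p}$ averages the partial Fourier sums $S_{n-p+1},\dots,S_n$ and under the assumed reading that "repeated" means iterating this averaging $r$ times; the function name is illustrative, and the asymptotic analysis of the paper is not touched.

    # Sketch: r-repeated de la Vallee Poussin sums of a sampled 2*pi-periodic
    # function.  V_{n,p} averages the partial Fourier sums S_{n-p+1},...,S_n,
    # which acts on the harmonic of order |j| as multiplication by
    #   1                for |j| <= n-p+1,
    #   (n - |j| + 1)/p  for n-p+1 < |j| <= n,
    #   0                for |j| > n;
    # repeating the sum r times raises this multiplier to the r-th power.
    import numpy as np

    def repeated_vallee_poussin(samples, n, p, r=1):
        c = np.fft.fft(samples)                             # discrete Fourier coefficients
        j = np.abs(np.fft.fftfreq(len(samples)) * len(samples))
        mult = np.clip((n - j + 1) / p, 0.0, 1.0) ** r      # de la Vallee Poussin multiplier
        return np.fft.ifft(c * mult).real
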
  8. Agafonov A.D.
    Lower bounds for conditional gradient type methods for minimizing smooth strongly convex functions
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 213-223

    In this paper, we consider conditional gradient methods for optimizing strongly convex functions. These are methods that use a linear minimization oracle, which, for a given vector $p \in \mathbb{R}^n$, computes the solution of the subproblem

    \[ \text{Argmin}_{x\in X}\,\langle p,\,x \rangle. \]

    There are a variety of conditional gradient methods that have a linear convergence rate in the strongly convex case. However, in all these methods the dimension of the problem enters the convergence rate, and in modern applications it can be very large. In this paper we prove that in the strongly convex case the convergence rate of conditional gradient methods depends, in the best case, on the dimension of the problem $n$ as $\widetilde{\Omega}\left(\sqrt{n}\right)$. Thus, conditional gradient methods may turn out to be ineffective for solving strongly convex optimization problems of large dimensions.

    Also, the application of conditional gradient methods to problems of minimizing a quadratic form is considered. The effectiveness of the Frank–Wolfe method for solving the quadratic optimization problem in the convex case on a simplex (PageRank) has already been proved. This work shows that the use of conditional gradient methods to solve the problem of minimizing a quadratic form in the strongly convex case is ineffective due to the presence of the dimension in the convergence rate of these methods. Therefore, the Shrinking Conditional Gradient method is considered. It differs from the conditional gradient methods in that it uses a modified linear minimization oracle, which, for a given vector $p \in \mathbb{R}^n$, computes the solution of the subproblem

    \[ \text{Argmin}\{\langle p,\,x \rangle\colon x\in X, \;\|x-x_0\| \leqslant R \}. \]

    The convergence rate of such an algorithm does not depend on the dimension. Using the Shrinking Conditional Gradient method, the complexity (the total number of arithmetic operations) of solving the problem of minimizing a quadratic form on an $\infty$-ball is obtained. The resulting estimate for the method is comparable to the complexity of the gradient method.
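
    A minimal sketch of a conditional gradient (Frank–Wolfe) iteration with the linear minimization oracle from entry 8, written for the probability simplex, where the oracle returns a vertex, and with the standard $2/(k+2)$ step size; the function names are illustrative, and the Shrinking Conditional Gradient method with its modified oracle is not reproduced.

    # Sketch: Frank-Wolfe (conditional gradient) on the probability simplex.
    # The linear minimization oracle Argmin_{x in X} <p, x> over the simplex is
    # simply the vertex with the smallest component of p.
    import numpy as np

    def lmo_simplex(p):
        s = np.zeros_like(p)
        s[np.argmin(p)] = 1.0
        return s

    def frank_wolfe(grad, n, iters=200):
        x = np.full(n, 1.0 / n)
        for k in range(iters):
            s = lmo_simplex(grad(x))      # call the linear minimization oracle
            gamma = 2.0 / (k + 2.0)       # standard open-loop step size
            x = (1.0 - gamma) * x + gamma * s
        return x

    # Example: minimize the quadratic 0.5*||A x - b||^2 over the simplex.
    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
    x = frank_wolfe(lambda x: A.T @ (A @ x - b), 10)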

  9. Silaev D.A.
    Semilocal smoothing S-splines
    Computer Research and Modeling, 2010, v. 2, no. 4, pp. 349-357

    Semilocal smoothing splines, or S-splines of class $C^p$, are considered. These splines consist of polynomials of degree $n$; the first $p+1$ coefficients of each polynomial are determined by the value of the previous polynomial and its $p$ derivatives at the splice point, while the coefficients of the higher-order terms are determined by the least-squares method. These conditions are supplemented by the periodicity condition for the spline function on the whole segment of definition, or by initial conditions. Uniqueness and existence theorems are proved. Stability and convergence conditions for these splines are established.

    Views (last year): 1. Citations: 6 (RSCI).
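
    A simplified sketch of the semilocal S-spline construction in entry 9: on each segment the first $p+1$ coefficients are copied from the value and $p$ derivatives of the previous polynomial at the splice point, and the remaining coefficients are fitted to the data by least squares; the function name, the zero initial conditions and the dense least-squares solve are illustrative simplifications, and the periodicity option and the convergence analysis of the paper are omitted.

    # Sketch: semilocal smoothing S-spline of degree n and smoothness class C^p.
    # Each segment carries a polynomial sum_k c_k * s^k in the local variable s.
    # The coefficients c_0..c_p are fixed by the previous polynomial (value and
    # p derivatives at the splice point); c_{p+1}..c_n come from least squares.
    import numpy as np
    from math import factorial

    def s_spline(x, y, breaks, n=7, p=4):
        pieces, prev = [], None
        for a, b in zip(breaks[:-1], breaks[1:]):
            mask = (x >= a) & (x <= b)
            s = x[mask] - a                     # local variable on [0, b - a]
            if prev is None:
                fixed = np.zeros(p + 1)         # first segment: zero initial conditions
            else:
                # value and first p derivatives of the previous piece at its right end
                h = a - prev_a
                derivs = [sum(prev[k] * factorial(k) / factorial(k - j) * h**(k - j)
                              for k in range(j, n + 1)) for j in range(p + 1)]
                fixed = np.array([d / factorial(j) for j, d in enumerate(derivs)])
            # least squares for the free coefficients c_{p+1}..c_n
            resid = y[mask] - sum(fixed[k] * s**k for k in range(p + 1))
            V = np.column_stack([s**k for k in range(p + 1, n + 1)])
            free, *_ = np.linalg.lstsq(V, resid, rcond=None)
            prev, prev_a = np.concatenate([fixed, free]), a
            pieces.append(prev)
        return pieces                           # coefficient vector per segment
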
  10. Rakcheeva T.A.
    Criteria and convergence of the focal approximation
    Computer Research and Modeling, 2013, v. 5, no. 3, pp. 379-394

    Methods for solving the problem of focal approximation, i.e., the approximation of a pointwise given smooth closed empirical curve by multifocal lemniscates, are investigated. The criteria and convergence of the developed approximate methods, using descriptions both in real and in complex variables, are analyzed. Topological equivalence of the criteria used is proved.

    Views (last year): 2.
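
    A minimal sketch of the fitting problem behind entry 10: a multifocal lemniscate is the level set $|\prod_j (z - c_j)| = R$ in the complex plane, and one natural least-squares criterion (an assumption here; the paper studies several criteria and their equivalence) minimizes the residuals $\log|\prod_j (z_k - c_j)| - \log R$ over the foci $c_j$ and the level $R$ at the given curve points $z_k$; the function name and the initial guess are illustrative.

    # Sketch: fit a multifocal lemniscate |prod_j (z - c_j)| = R to points z_k of
    # a closed curve by nonlinear least squares in the foci c_j and log R.
    import numpy as np
    from scipy.optimize import least_squares

    def fit_lemniscate(z_points, m):
        def residuals(params):
            foci = params[:m] + 1j * params[m:2*m]
            log_r = params[2*m]
            # log|prod (z - c_j)| - log R at every curve point
            return np.sum(np.log(np.abs(z_points[:, None] - foci[None, :])), axis=1) - log_r

        # initial guess: m foci on a small circle around the centroid of the data
        center = np.mean(z_points)
        foci0 = center + 0.1 * np.exp(2j * np.pi * np.arange(m) / m)
        log_r0 = np.median(np.sum(np.log(np.abs(z_points[:, None] - foci0[None, :])), axis=1))
        p0 = np.concatenate([foci0.real, foci0.imag, [log_r0]])
        sol = least_squares(residuals, p0)
        return sol.x[:m] + 1j * sol.x[m:2*m], np.exp(sol.x[2*m])

    # Example: points on a circle of radius 2 are fitted by a one-focus "lemniscate".
    theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    foci, R = fit_lemniscate(2.0 * np.exp(1j * theta), m=1)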
