Search results for 'linearization method':
Articles found: 116
  1. Matyushkin I.V., Zapletina M.A.
    Computer research of the holomorphic dynamics of exponential and linear-exponential maps
    Computer Research and Modeling, 2018, v. 10, no. 4, pp. 383-405

    The work belongs to the field of experimental mathematics, which investigates the properties of mathematical objects by computational means. The basis is the exponential map, whose topological properties (Cantor bouquets) differ from those of polynomial and rational complex-valued functions. The subject of the study is the character and features of the Fatou and Julia sets, as well as the equilibrium points and the orbits of zero, for three iterated complex-valued maps: $f:z \to (1+ \mu) \exp (iz)$, $g : z \to \big(1+ \mu |z - z^*|\big) \exp (iz)$, $h : z \to \big(1+ \mu (z - z^* )\big) \exp (iz)$, where $z,\mu \in \mathbb{C}$ and $z^* : \exp (iz^*) = z^*$. For the quasilinear map $g$, which lacks analyticity, two bifurcation transitions were discovered: the creation of a new equilibrium point (for which the critical value of the linear parameter was found; the bifurcation combines a pitchfork-type and a saddle-node transition) and a transition to a radical transformation of the Fatou set. A nontrivial character of convergence to a fixed point is revealed, associated with the appearance of “valleys” on the graph of convergence rates. For the other two maps, the monoperiodicity of regimes is significant; the phenomenon of period doubling is noted (in one case along the path $39\to 3$, in the other along the path $17\to 2$), and the period multiplicity is found to coincide with the number of arms of the Julia spiral in a neighborhood of a fixed point. Rich illustrative material, numerical results of experiments, and summary tables reflecting the parametric dependence of the maps are given. Several questions are formulated in the paper for further research by traditional mathematical methods.
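    A minimal numerical sketch of the kind of experiment described above: iterating $f(z) = (1+\mu)\exp(iz)$ from $z_0 = 0$ and recording the orbit and step sizes. The value of $\mu$, the starting point, and the iteration budget are illustrative assumptions, not the paper's settings.

```python
# Sketch: iterate f(z) = (1 + mu) * exp(i z) starting from z0 = 0 and watch whether
# the orbit settles near a fixed point or escapes. Parameters are illustrative assumptions.
import numpy as np

def orbit(mu, z0=0.0, n_iter=200, escape_radius=1e6):
    z = complex(z0)
    points = [z]
    for _ in range(n_iter):
        z = (1 + mu) * np.exp(1j * z)
        points.append(z)
        if abs(z) > escape_radius:          # treat as escape to infinity
            break
    return np.array(points)

if __name__ == "__main__":
    mu = 0.1 + 0.05j                        # assumed test value
    pts = orbit(mu)
    steps = np.abs(np.diff(pts))            # |z_{k+1} - z_k|, a crude convergence-rate graph
    print("last point:", pts[-1])
    print("last step sizes:", steps[-5:])
```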

    Views (last year): 51. Citations: 1 (RSCI).
  2. Rovenska O.G.
    Approximation of analytic functions by repeated de la Vallee Poussin sums
    Computer Research and Modeling, 2019, v. 11, no. 3, pp. 367-377

    The paper deals with the problems of approximation of periodic functions of high smoothness by arithmetic means of Fourier sums. The simplest and most natural example of a linear process of approximation of continuous periodic functions of a real variable is approximation by partial sums of the Fourier series. However, the sequences of partial Fourier sums do not converge uniformly over the entire class of continuous $2\pi$-periodic functions. For this reason, a significant number of papers are devoted to studying the approximative properties of other approximation methods, generated by certain transformations of the partial sums of the Fourier series, which allow one to construct sequences of trigonometric polynomials that converge uniformly for every function $f \in C$. In particular, over the past decades, de la Vallee Poussin sums and Fejer sums have been studied extensively. One of the most important directions in this field is the study of the asymptotic behavior of the upper bounds of deviations of arithmetic means of Fourier sums on various classes of periodic functions. Methods for investigating integral representations of deviations of polynomials on classes of periodic differentiable functions of a real variable originated and were developed in the works of S.M. Nikol’sky, S.B. Stechkin, N.P. Korneichuk, V.K. Dzadyk, and others.

    The work systematizes known results related to the approximation of classes of periodic functions of high smoothness by arithmetic means of Fourier sums and presents new facts obtained for particular cases. The paper studies the approximative properties of $r$-repeated de la Vallee Poussin sums on classes of periodic functions that admit a regular extension into a fixed strip of the complex plane. We obtain asymptotic formulas for the upper bounds of the deviations of repeated de la Vallee Poussin sums on classes of periodic analytic functions. In certain cases, these formulas give a solution of the corresponding Kolmogorov–Nikolsky problem. We indicate conditions under which the repeated de la Vallee Poussin sums guarantee a better order of approximation than the ordinary de la Vallee Poussin sums.
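    A minimal sketch, assuming the common definition $V_{n,p}(f) = \frac{1}{p}\sum_{k=n-p}^{n-1} S_k(f)$ of the de la Vallee Poussin mean of partial Fourier sums $S_k$, and reading the $r$-repeated sum as applying the same coefficient multipliers $r$ times; the test function and the values of $n$, $p$, $r$ are assumptions, not the paper's construction.

```python
# Sketch: de la Vallee Poussin mean built from partial Fourier sums, applied in the
# Fourier-coefficient domain and repeated r times. Test function and parameters are
# illustrative assumptions.
import numpy as np

def fourier_coeffs(f, m=512):
    """Approximate Fourier coefficients of a 2*pi-periodic f on a uniform grid via FFT."""
    x = 2 * np.pi * np.arange(m) / m
    c = np.fft.fft(f(x)) / m
    return x, c

def vdlp_multipliers(n, p, m):
    """Trapezoidal multipliers of the de la Vallee Poussin mean acting on harmonic k."""
    lam = np.zeros(m)
    for j in range(m):
        k = min(j, m - j)                     # harmonic number in FFT ordering
        if k <= n - p:
            lam[j] = 1.0
        elif k < n:
            lam[j] = (n - k) / p              # linear decay between n - p and n
    return lam

def repeated_vdlp(f, n=16, p=4, r=2, m=512):
    x, c = fourier_coeffs(f, m)
    lam = vdlp_multipliers(n, p, m)
    approx = np.fft.ifft(c * lam**r) * m      # apply the averaging r times
    return x, approx.real

if __name__ == "__main__":
    f = lambda x: np.exp(np.cos(x))           # analytic 2*pi-periodic test function
    x, v = repeated_vdlp(f)
    print("max deviation:", np.max(np.abs(v - f(x))))
```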

    Views (last year): 45.
  3. Fasondini M., Hale N., Spoerer R., Weideman J.A.C.
    Quadratic Padé Approximation: Numerical Aspects and Applications
    Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1017-1031

    Padé approximation is a useful tool for extracting singularity information from a power series. A linear Padé approximant is a rational function and can provide estimates of pole and zero locations in the complex plane. A quadratic Padé approximant has square root singularities and can, therefore, provide additional information such as estimates of branch point locations. In this paper, we discuss numerical aspects of computing quadratic Padé approximants as well as some applications. Two algorithms for computing the coefficients in the approximant are discussed: a direct method involving the solution of a linear system (well-known in the mathematics community) and a recursive method (well-known in the physics community). We compare the accuracy of these two methods when implemented in floating-point arithmetic and discuss their pros and cons. In addition, we extend Luke’s perturbation analysis of linear Padé approximation to the quadratic case and identify the problem of spurious branch points in the quadratic approximant, which can cause a significant loss of accuracy. A possible remedy for this problem is suggested by noting that these troublesome points can be identified by the recursive method mentioned above. Another complication with the quadratic approximant arises in choosing the appropriate branch. One possibility, which is to base this choice on the linear approximant, is discussed in connection with an example due to Stahl. It is also known that the quadratic method is capable of providing reasonable approximations on secondary sheets of the Riemann surface, a fact we illustrate here by means of an example. Two concluding applications show the superiority of the quadratic approximant over its linear counterpart: one involving a special function (the Lambert $W$-function) and the other a nonlinear PDE (the continuation of a solution of the inviscid Burgers equation into the complex plane).
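    The direct method mentioned above reduces to a homogeneous linear system for the polynomial coefficients. Below is a sketch of one standard formulation, solved here via an SVD null vector; the chosen degrees, the normalization, and the test series for $\sqrt{1-z}$ are assumptions rather than the authors' implementation.

```python
# Sketch of the "direct" construction of a quadratic Pade approximant: find polynomials
# p, q, r (degrees L, M, N) with p(z) + q(z) f(z) + r(z) f(z)^2 = O(z^{L+M+N+2})
# by solving the homogeneous linear system for their coefficients (here via SVD).
import numpy as np

def taylor_convolution_matrix(c, ncols, nrows):
    """Toeplitz block mapping polynomial coefficients a -> Taylor coefficients of a(z)*f(z)."""
    T = np.zeros((nrows, ncols))
    for i in range(ncols):
        T[i:nrows, i] = c[: nrows - i]
    return T

def quadratic_pade(c, L, M, N):
    """Coefficients (p, q, r) of a quadratic Pade approximant from Taylor coefficients c of f."""
    nrows = L + M + N + 2                     # matched Taylor coefficients 0 .. L+M+N+1
    c = np.asarray(c, dtype=float)[:nrows]
    c2 = np.convolve(c, c)[:nrows]            # Taylor coefficients of f^2
    A = np.hstack([
        np.eye(nrows, L + 1),                            # block for p
        taylor_convolution_matrix(c, M + 1, nrows),      # block for q (times f)
        taylor_convolution_matrix(c2, N + 1, nrows),     # block for r (times f^2)
    ])
    _, _, Vt = np.linalg.svd(A)
    v = Vt[-1]                                # null vector of the underdetermined system
    return v[: L + 1], v[L + 1 : L + M + 2], v[L + M + 2 :]

if __name__ == "__main__":
    # Taylor coefficients of f(z) = sqrt(1 - z): c_k = binom(1/2, k) * (-1)^k
    K = 12
    coeffs = [np.prod([(0.5 - j) / (j + 1) for j in range(k)]) * (-1) ** k for k in range(K)]
    p, q, r = quadratic_pade(coeffs, L=1, M=0, N=0)
    print("p:", p, "q:", q, "r:", r)          # expect p ~ const*(z - 1), q ~ 0, r ~ const
```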

  4. Ryabtsev A.B.
    The error accumulation in the conjugate gradient method for degenerate problem
    Computer Research and Modeling, 2021, v. 13, no. 3, pp. 459-472

    In this paper, we consider the conjugate gradient method for minimizing a quadratic function with additive noise in the gradient. Three concepts of noise were considered: antagonistic noise in the linear term, stochastic noise in the linear term, and noise in the quadratic term, as well as combinations of the first and second with the last. It was found experimentally that no error accumulation occurs for any of the considered concepts, contrary to the folklore opinion that, as in accelerated methods, error accumulation must take place. The paper gives motivation for why the error may not accumulate. The dependence of the solution error both on the magnitude (scale) of the noise and on the size of the solution obtained with the conjugate gradient method was also studied experimentally. Hypotheses about the dependence of the solution error on the noise scale and on the size (2-norm) of the solution are proposed and tested for all the concepts considered. It turned out that the error in the solution (in function value) depends linearly on the noise scale. The work contains graphs illustrating each individual study, as well as a detailed description of the numerical experiments, including how noise is applied to both the vector and the matrix.
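    A minimal sketch of an experiment of this type: conjugate gradients applied to $\frac12 x^\top A x - b^\top x$ with additive noise injected into every gradient evaluation, tracking the error in function value. The problem size, spectrum, Gaussian noise model, and Fletcher–Reeves update are assumptions.

```python
# Sketch: conjugate gradient on f(x) = 0.5*x'Ax - b'x with additive noise in every
# gradient evaluation; the error in function value is recorded per iteration.
import numpy as np

rng = np.random.default_rng(0)

def noisy_cg(A, b, noise_scale, n_iter=50):
    n = len(b)
    x = np.zeros(n)
    g = A @ x - b + noise_scale * rng.standard_normal(n)     # noisy gradient
    d = -g
    x_star = np.linalg.solve(A, b)
    f = lambda y: 0.5 * y @ A @ y - b @ y
    errors = []
    for _ in range(n_iter):
        Ad = A @ d
        alpha = -(g @ d) / (d @ Ad)                          # exact line search along d
        x = x + alpha * d
        g_new = A @ x - b + noise_scale * rng.standard_normal(n)
        beta = (g_new @ g_new) / (g @ g)                     # Fletcher-Reeves update
        d = -g_new + beta * d
        g = g_new
        errors.append(f(x) - f(x_star))
    return np.array(errors)

if __name__ == "__main__":
    n = 100
    Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
    A = Q @ np.diag(np.linspace(1, 100, n)) @ Q.T            # SPD test matrix
    b = rng.standard_normal(n)
    for scale in (0.0, 1e-3, 1e-1):
        err = noisy_cg(A, b, scale)
        print(f"noise {scale:g}: final error in f = {err[-1]:.3e}")
```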

  5. Agafonov A.D.
    Lower bounds for conditional gradient type methods for minimizing smooth strongly convex functions
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 213-223

    In this paper, we consider conditional gradient methods for optimizing strongly convex functions. These are methods that use a linear minimization oracle, which, for a given vector $p \in \mathbb{R}^n$, computes the solution of the subproblem

    \[ \text{Argmin}_{x\in X}{\langle p,\,x \rangle}. \]

    There is a variety of conditional gradient methods that have a linear convergence rate in the strongly convex case. However, in all these methods the dimension of the problem enters the convergence rate, and in modern applications this dimension can be very large. In this paper, we prove that in the strongly convex case the convergence rate of conditional gradient methods depends, in the best case, on the dimension of the problem $n$ as $\widetilde{\Omega}\left(\!\sqrt{n}\right)$. Thus, conditional gradient methods may turn out to be ineffective for solving strongly convex optimization problems of large dimension.

    The application of conditional gradient methods to minimization of a quadratic form is also considered. The effectiveness of the Frank – Wolfe method for solving the quadratic optimization problem in the convex case on a simplex (PageRank) has already been proved. This work shows that using conditional gradient methods to minimize a quadratic form in the strongly convex case is ineffective because the dimension enters the convergence rate of these methods. Therefore, the Shrinking Conditional Gradient method is considered. It differs from the conditional gradient methods in that it uses a modified linear minimization oracle, which, for a given vector $p \in \mathbb{R}^n$, computes the solution of the subproblem \[ \text{Argmin}\{\langle p, \,x \rangle\colon x\in X, \;\|x-x_0^{}\| \leqslant R \}. \] The convergence rate of such an algorithm does not depend on the dimension. Using the Shrinking Conditional Gradient method, the complexity (the total number of arithmetic operations) of solving the minimization problem of a quadratic form on an $\infty$-ball is obtained. The resulting estimate for the method is comparable to the complexity of the gradient method.
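    A sketch of the basic conditional gradient (Frank–Wolfe) iteration with a linear minimization oracle over the unit simplex, on a toy least-squares objective; the data, sizes, and the step-size rule $2/(k+2)$ are assumptions, and the Shrinking Conditional Gradient variant is not shown.

```python
# Sketch: Frank-Wolfe (conditional gradient) for min 0.5*||Ax - b||^2 over the unit
# simplex. The linear minimization oracle Argmin_{x in simplex} <p, x> returns the
# vertex e_i with i = argmin_i p_i.
import numpy as np

rng = np.random.default_rng(1)

def lmo_simplex(p):
    """Linear minimization oracle over the unit simplex: a vertex minimizing <p, x>."""
    s = np.zeros_like(p)
    s[np.argmin(p)] = 1.0
    return s

def frank_wolfe(A, b, n_iter=500):
    n = A.shape[1]
    x = np.full(n, 1.0 / n)                   # start at the simplex barycenter
    for k in range(n_iter):
        grad = A.T @ (A @ x - b)
        s = lmo_simplex(grad)
        gamma = 2.0 / (k + 2)                 # classical step-size rule
        x = (1 - gamma) * x + gamma * s
    return x

if __name__ == "__main__":
    m, n = 60, 40
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)
    x = frank_wolfe(A, b)
    print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2, "sum(x):", x.sum())
```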

  6. Bozhko A.N.
    Modeling of disassembly processes of complex products
    Computer Research and Modeling, 2022, v. 14, no. 3, pp. 525-537

    The work is devoted to modeling the processes of disassembling complex products in CAD systems. The ability to dismantle a product in a given sequence is determined at the early design stages and is realized at the end of the life cycle. Therefore, modern CAD systems should have tools for assessing the complexity of dismantling parts and assembly units of a product. A hypergraph model of the mechanical structure of the product is proposed. It is shown that the mathematical description of coherent and sequential disassembly operations is a normal cut of an edge of the hypergraph. A theorem on the properties of normal cuts is proved. This theorem makes it possible to organize a simple recursive procedure for generating all cuts of the hypergraph. The set of all cuts is represented as an AND/OR tree. The tree contains information about plans for disassembling the product and its parts. Mathematical descriptions of various types of disassembly processes are proposed: complete, incomplete, linear, and nonlinear. It is shown that the decisive graph of the AND/OR tree is a model of disassembling the product and all the components obtained in the process of dismantling. An important characteristic of the complexity of dismantling parts, the nesting depth, is considered, and a method for efficiently computing a lower bound for this characteristic is developed.
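    As a rough illustration of the recursive idea, the sketch below enumerates two-block splits of a part set whose blocks remain connected through shared hyperedges and counts the resulting disassembly plans in AND/OR fashion; this connectivity-based notion of an admissible cut, the toy product, and the counting are simplifying assumptions, not the paper's definition of a normal cut.

```python
# Sketch: recursive enumeration of disassembly plans for a product modeled as a
# hypergraph (parts = vertices, joints = hyperedges). A split of a connected part set
# into two connected blocks plays the role of a cut. Plans are counted as in an
# AND/OR tree: OR over admissible cuts, AND over the two resulting blocks.
from functools import lru_cache
from itertools import combinations

EDGES = [frozenset(e) for e in ({1, 2}, {2, 3}, {3, 4}, {2, 4}, {4, 5})]  # toy product

def connected(parts):
    """Check that the parts form one connected component via shared hyperedges."""
    parts = set(parts)
    if not parts:
        return False
    seen, stack = set(), [next(iter(parts))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        for e in EDGES:
            if v in e:
                stack.extend((e & parts) - seen)
    return seen == parts

@lru_cache(maxsize=None)
def count_plans(parts):
    """Number of distinct complete disassembly plans of a connected part set."""
    if len(parts) == 1:
        return 1
    total = 0
    items = sorted(parts)
    for k in range(1, len(items) // 2 + 1):
        for left in combinations(items, k):
            left, right = frozenset(left), parts - frozenset(left)
            if len(left) == len(right) and min(left) != min(items):
                continue                       # avoid counting an equal-size split twice
            if connected(left) and connected(right):
                total += count_plans(left) * count_plans(right)
    return total

if __name__ == "__main__":
    print("disassembly plans:", count_plans(frozenset({1, 2, 3, 4, 5})))
```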

  7. Berger A.I., Guda S.A.
    Optimal threshold selection algorithms for multi-label classification: property study
    Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1221-1238

    Multi-label classification models arise in various areas of life, which is explained by the growing amount of information that requires prompt analysis. One mathematical method for solving this problem is the plug-in approach: at the first stage, a ranking function is built for each class, ordering all objects in some way, and at the second stage the optimal thresholds are selected, so that objects on one side of the threshold are assigned to the given class and objects on the other side are not. The thresholds are chosen to maximize the target quality measure. The algorithms whose properties are investigated in this article address the second stage of the plug-in approach, the choice of the optimal threshold vector. This step becomes nontrivial if the $F$-measure of average precision and recall is used as the target quality measure, since it does not allow independent threshold optimization in each class. In problems of extreme multi-label classification, the number of classes can reach hundreds of thousands, so the original optimization problem is reduced to searching for a fixed point of a specially introduced transformation $\boldsymbol V$, defined on the unit square in the plane of average precision $P$ and recall $R$. Using this transformation, two optimization algorithms are proposed: the $F$-measure linearization method and the method of analyzing the domain of $\boldsymbol V$. The properties of the algorithms are studied on multi-label classification data sets of various sizes and origins, in particular the dependence of the error on the number of classes, on the $F$-measure parameter, and on the internal parameters of the methods under study. A peculiarity of both algorithms was found for problems whose domain of $\boldsymbol V$ contains large linear boundaries. When the optimal point is located in the vicinity of these boundaries, the errors of both methods do not decrease as the number of classes grows. In this case, the linearization method determines the argument of the optimal point quite accurately, while the method of analyzing the domain of $\boldsymbol V$ accurately determines the polar radius.
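    To make the thresholding stage concrete, the sketch below grid-searches a single common threshold that maximizes the $F$-measure built from class-averaged precision and recall; the common threshold, the grid, and the synthetic data are simplifying assumptions, and the paper's linearization and $\boldsymbol V$-domain-analysis methods are not reproduced.

```python
# Sketch of the thresholding stage of the plug-in approach: choose a common threshold
# on per-class scores to maximize the F-measure of class-averaged precision and recall.
import numpy as np

rng = np.random.default_rng(2)

def averaged_precision_recall(scores, labels, threshold):
    """Class-averaged precision and recall for a common threshold on the scores."""
    pred = scores >= threshold
    eps = 1e-12
    tp = (pred & labels).sum(axis=0)
    precision = tp / (pred.sum(axis=0) + eps)
    recall = tp / (labels.sum(axis=0) + eps)
    return precision.mean(), recall.mean()

def best_common_threshold(scores, labels, beta=1.0, grid=np.linspace(0.05, 0.95, 91)):
    best_t, best_f = None, -1.0
    for t in grid:
        P, R = averaged_precision_recall(scores, labels, t)
        F = (1 + beta**2) * P * R / (beta**2 * P + R + 1e-12)
        if F > best_f:
            best_t, best_f = t, F
    return best_t, best_f

if __name__ == "__main__":
    n_objects, n_classes = 1000, 50
    labels = rng.random((n_objects, n_classes)) < 0.1              # sparse multi-label targets
    scores = np.clip(0.7 * labels + 0.3 * rng.random((n_objects, n_classes)), 0, 1)
    t, f = best_common_threshold(scores, labels)
    print(f"best threshold {t:.2f}, F-measure {f:.3f}")
```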

  8. Sviridenko A.B.
    The iterations’ number estimation for strongly polynomial linear programming algorithms
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 249-285

    A direct algorithm for solving a linear programming (LP) problem given in canonical form is considered. The algorithm consists of two successive stages, at which the following LP problems are solved by a direct method: a non-degenerate auxiliary problem at the first stage and a problem equivalent to the original one at the second. The construction of the auxiliary problem is based on a multiplicative version of the Gaussian elimination method, whose very structure provides the following possibilities: detection of inconsistency and linear dependence of the constraints; detection of variables whose optimal values are obviously zero; actual elimination of primal variables and reduction of the dimension of the space in which the solution of the original problem is determined. In the process of actually eliminating variables, the algorithm generates a sequence of multipliers whose main rows form the constraint matrix of the auxiliary problem, and the possibility of minimizing the fill-in of the main rows of the multipliers is inherent in the very structure of direct methods. At the same time, there is no need to transfer information (basis, plan, and optimal value of the objective function) to the second stage of the algorithm, nor to apply one of the ways of avoiding cycling in order to guarantee finite convergence.

    Two variants of the algorithm for solving the auxiliary problem in conjugate canonical form are presented. The first is based on solving it by a direct algorithm in terms of the simplex method, and the second is based on solving the dual problem by the simplex method. It is shown that both variants of the algorithm generate, for the same initial data (inputs), the same sequence of points: the basic solution and the current dual solution of the vector of row estimates. Hence, it is concluded that the direct algorithm is an algorithm of the simplex-method type. It is also shown that comparison of the numerical schemes leads to the conclusion that the direct algorithm reduces, by a cubic law, the number of arithmetic operations needed to solve the auxiliary problem, compared with the simplex method. An estimate of the number of iterations is given.
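    A small sketch of the multiplier representation underlying the construction above: Gaussian elimination written as a product of elementary multiplier matrices $M_k$ with $M_{n-1}\cdots M_1 A = U$; the dense test matrix and the absence of pivoting are assumptions, and the two-stage LP bookkeeping and sparsity handling are not reproduced.

```python
# Sketch of the multiplicative form of Gaussian elimination: the reduction of A to upper
# triangular form is accumulated as a product of elementary multiplier matrices M_k, so
# that M_{n-1} ... M_1 A = U (no pivoting; pivots assumed nonzero).
import numpy as np

def multiplicative_elimination(A):
    """Return the list of multipliers M_k and the resulting upper triangular U."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    multipliers = []
    for k in range(n - 1):
        M = np.eye(n)
        M[k + 1:, k] = -A[k + 1:, k] / A[k, k]   # eliminate entries below the pivot
        A = M @ A
        multipliers.append(M)
    return multipliers, A

if __name__ == "__main__":
    A = np.array([[4.0, 2.0, 1.0],
                  [2.0, 5.0, 3.0],
                  [1.0, 3.0, 6.0]])
    Ms, U = multiplicative_elimination(A)
    prod = np.linalg.multi_dot(Ms[::-1] + [A]) if Ms else A
    print("U =\n", U)
    print("check M2 @ M1 @ A =\n", prod)
```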

  9. Chukanov S.N., Pershina E.L.
    Formation of optimal control of nonlinear dynamic object based on Takagi–Sugeno model
    Computer Research and Modeling, 2015, v. 7, no. 1, pp. 51-59

    An algorithm for fuzzy control of an essentially nonlinear dynamic object is considered in this article. To solve the nonlinear optimal control problem, it is proposed to use the linear quadratic regulator (LQR) method with a fuzzy Takagi–Sugeno model. The algorithm can be used for the design of deterministic optimal control of nonlinear objects. An optimal control algorithm for the rotational motion of a spacecraft is proposed.
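    A minimal sketch of the combination described here: an LQR gain is computed for each local linear model of a Takagi–Sugeno description, and the control blends the gains with fuzzy membership weights. The two local models, membership functions, and cost matrices are toy assumptions, not the spacecraft model of the paper.

```python
# Sketch: Takagi-Sugeno fuzzy LQR. A gain K_i is computed for each local linear model
# (A_i, B) by solving the continuous algebraic Riccati equation, and the control is the
# membership-weighted blend u = -(sum_i w_i(x) K_i) x.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)        # K = R^{-1} B' P

# Two local models of a toy second-order plant (assumed operating points).
A1 = np.array([[0.0, 1.0], [-1.0, -0.5]])
A2 = np.array([[0.0, 1.0], [-3.0, -0.2]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
K1, K2 = lqr_gain(A1, B, Q, R), lqr_gain(A2, B, Q, R)

def membership(x):
    """Assumed fuzzy memberships depending on the first state component."""
    w1 = 1.0 / (1.0 + x[0] ** 2)
    return np.array([w1, 1.0 - w1])

def control(x):
    w = membership(x)
    K = w[0] * K1 + w[1] * K2                 # blended state-feedback gain
    return -(K @ x)

if __name__ == "__main__":
    x = np.array([0.5, -0.2])
    print("K1 =", K1, "\nK2 =", K2, "\nu(x) =", control(x))
```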

    Views (last year): 2.
  10. Sviridenko A.B.
    Direct multiplicative methods for sparse matrices. Newton methods
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 679-703

    We consider a numerically stable direct multiplicative algorithm for solving systems of linear equations that takes into account the sparseness of matrices presented in packed form. The advantage of the algorithm is the ability to minimize the fill-in of the main rows of the multipliers without losing the accuracy of the results. Moreover, no changes are made in the position of the next processed row of the matrix, which allows using static data storage formats. Solving a linear system by the direct multiplicative algorithm is, like solving with an $LU$-decomposition, just another scheme for implementing the Gaussian elimination method.

    In this paper, this algorithm is the basis for solving the following problems:

    Problem 1. Setting the descent direction in Newton-type methods of unconstrained optimization by integrating one of the known techniques for constructing an essentially positive definite matrix (a generic example of such a technique is sketched after Problem 3 below). This approach allows us to weaken or remove specific additional difficulties caused by the need to solve large systems of equations with sparse matrices presented in packed form.

    Problem 2. Construction of a new mathematical formulation of the quadratic programming problem and a new form of specifying the necessary and sufficient optimality conditions. They are quite simple and can be used to construct mathematical programming methods, for example, to find the minimum of a quadratic function on a polyhedral set of constraints based on solving systems of linear equations whose dimension is not higher than the number of variables of the objective function.

    Problem 3. Construction of a continuous analogue of the problem of minimizing a real quadratic polynomial in Boolean variables and a new form of defining the necessary and sufficient optimality conditions for developing methods that solve it in polynomial time. As a result, the original problem is reduced to the problem of finding the minimum distance between the origin and a corner point of a convex polyhedron, which is a perturbation of the $n$-dimensional cube and is described by a system of double linear inequalities with an upper triangular matrix of coefficients with units on the main diagonal. Only two faces are subject to investigation, one of which, or both, contains the vertices closest to the origin. To compute them, it suffices to solve $4n - 4$ systems of linear equations and to choose among them all the nearest equidistant vertices in polynomial time. The problem of minimizing a quadratic polynomial is $NP$-hard, since the $NP$-hard vertex cover problem for an arbitrary graph reduces to it. It follows therefrom that $P = NP$, a conclusion based on going beyond the limits of integer optimization methods.
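    One well-known technique of the kind referred to in Problem 1 is sketched below: a multiple of the identity is added to the Hessian until a Cholesky factorization succeeds, and the descent direction is taken from the modified matrix. This generic modification is an assumption made for illustration; it is not the paper's direct multiplicative construction for sparse packed matrices.

```python
# Sketch of one standard way to obtain a descent direction in a Newton-type method:
# shift the Hessian by tau*I until it is positive definite (Cholesky succeeds), then
# solve (H + tau*I) d = -g.
import numpy as np

def modified_newton_direction(H, g, tau0=1e-3, growth=10.0, max_tries=20):
    tau = 0.0
    for _ in range(max_tries):
        try:
            L = np.linalg.cholesky(H + tau * np.eye(len(g)))
            y = np.linalg.solve(L, -g)        # forward substitution: L y = -g
            return np.linalg.solve(L.T, y)    # back substitution: L' d = y
        except np.linalg.LinAlgError:
            tau = tau0 if tau == 0.0 else tau * growth
    raise RuntimeError("could not make the Hessian positive definite")

if __name__ == "__main__":
    H = np.array([[2.0, 0.0], [0.0, -0.5]])   # indefinite Hessian
    g = np.array([1.0, 1.0])
    d = modified_newton_direction(H, g)
    print("descent direction:", d, "  g'd =", g @ d)   # expect a negative inner product
```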

    Views (last year): 7. Citations: 1 (RSCI).