Search results for 'factors':
Articles found: 130
  1. Yakovenko G.N.
    Reasons for nonlinearity: globality and noncommutativity
    Computer Research and Modeling, 2009, v. 1, no. 4, pp. 355-358

    A dynamic process modeled by ordinary differential equations is considered. If a nonautonomous system of ordinary differential equations has a general solution in a certain domain, then the system can be simplified by a nonautonomous substitution of variables: the right-hand sides turn to zero. The right-hand sides of an autonomous system of ordinary differential equations can be linearized in a neighborhood of a nonsingular point. A separable system is considered, whose right-hand side is a linear combination of autonomous vector fields with factors that are functions of the independent variable. If the fields commute, then they can be linearized by a common substitution of variables.
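
    In the notation of this sketch (the symbols $u_k$, $X_k$ are ours, not necessarily the paper's), the separable system and the commutativity condition read

        $$\dot{x} = \sum_{k=1}^{m} u_k(t)\, X_k(x), \qquad [X_i, X_j] = 0 \quad \text{for all } i, j .$$

    If the fields commute and are linearly independent near a nonsingular point, a single change of variables $y = \varphi(x)$ rectifies them simultaneously, $X_k = \partial/\partial y_k$, and the system takes the linear (in fact decoupled) form $\dot{y}_k = u_k(t)$.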

    Views (last year): 3.
  2. Koganov A.V.
    Uniform graph embedding into metric spaces
    Computer Research and Modeling, 2012, v. 4, no. 2, pp. 241-251

    The task of embedding an infinite countable graph into a continuous metric space is considered. The concept of a uniform embedding, which has no accumulation points in the set of vertex images and in which the images of all graph edges have bounded length, is introduced. Necessary and sufficient conditions for the possibility of a uniform embedding into spaces with Euclidean and Lorentz metrics are stated in terms of graph structure. It is proved that tree graphs with finite branching admit a uniform embedding into a space with the absolute Minkowski metric.
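
    A possible formalization of the notion described above (our notation): an embedding $f : V(G) \to (M, d)$ is uniform if

        $$f(V(G)) \ \text{has no accumulation point in } M \qquad \text{and} \qquad \sup_{(u,v) \in E(G)} d\big(f(u), f(v)\big) < \infty .$$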

  3. Shovin V.A.
    Search oblique factor structure by the "oblimax" method
    Computer Research and Modeling, 2013, v. 5, no. 2, pp. 123-130

    An improved method of oblique rotation, “oblimax”, is suggested. The problem of choosing pairs of factors is discussed. A novel sequence for selecting pairs of factors that leads to the best factor structure under “oblimax” is suggested.
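
    For orientation, the classical oblimax idea is to maximize a kurtosis-like ratio of the factor loadings over rotations of pairs of factors; the small sketch below only evaluates that criterion. The function name, the NumPy implementation, and the omission of the constant factor counting the number of loadings are our assumptions, not the paper's code.

        import numpy as np

        def oblimax_criterion(loadings):
            """Kurtosis-like oblimax criterion for a loading matrix (variables x factors).

            Larger values correspond to a simpler factor structure; an oblimax-type
            rotation searches, pair of factors by pair of factors, for the rotation
            angle that maximizes this quantity.
            """
            L = np.asarray(loadings, dtype=float)
            return (L ** 4).sum() / (L ** 2).sum() ** 2

        # Illustrative call on a random loading matrix
        rng = np.random.default_rng(0)
        print(oblimax_criterion(rng.normal(size=(10, 3))))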

  4. Sviridenko A.B.
    Designing a zero on a linear manifold, a polyhedron, and a vertex of a polyhedron. Newton methods of minimization
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 563-591

    We consider approaches to the construction of methods for solving quadratic programming problems that arise when calculating descent directions in repeated minimization of smooth functions on a set given by a system of linear equalities. The approach consists of two stages.

    At the first stage, the quadratic programming problem is transformed by a numerically stable direct multiplicative algorithm into an equivalent problem of projecting the origin of coordinates onto a linear manifold, which defines a new mathematical formulation of the dual quadratic problem. For this, a numerically stable direct multiplicative method for solving systems of linear equations is proposed that takes into account the sparseness of matrices presented in packed form. The advantage of this approach is that the modified Cholesky factors used to construct an essentially positive definite matrix of the system of equations, and the solution of that system, are computed within a single procedure. It also makes it possible to minimize the fill-in of the main rows of the multipliers without losing accuracy of the results; since no changes are made to the position of the next processed row of the matrix, static data storage formats can be used.

    At the second stage, the necessary and sufficient optimality conditions in Kuhn–Tucker form determine the calculation of the descent direction: the solution of the dual quadratic problem is reduced to solving a system of linear equations with a symmetric positive definite matrix to compute the Lagrange multipliers, and to substituting that solution into the formula for the descent direction.

    It is proved that the proposed approach to computing the descent direction by numerically stable direct multiplicative methods requires, per iteration, less computation by a cubic law than one iteration of the well-known dual method of Gill and Murray. Besides, the proposed method allows the computational process to be organized from any starting point that the user chooses as the initial approximation of the solution.

    Variants of the problem of projecting the origin of coordinates onto a linear manifold, a convex polyhedron, and a vertex of a convex polyhedron are presented, and the relationship between these problems and the implementation of methods for solving them are described.
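
    To make the first-stage subproblem concrete: projecting the origin onto the linear manifold $\{x : Ax = b\}$ is $\min \|x\|^2$ subject to $Ax = b$, with solution $x_* = A^\top (AA^\top)^{-1} b$. Below is a minimal dense sketch via a Cholesky factorization of $AA^\top$; it only illustrates the subproblem, whereas the paper works with a sparse, packed-form multiplicative algorithm, and all names here are ours.

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        def project_origin_onto_manifold(A, b):
            """Return argmin ||x|| subject to A x = b (A assumed to have full row rank)."""
            gram = A @ A.T                    # symmetric positive definite matrix A A^T
            factor = cho_factor(gram)         # Cholesky factorization
            lam = cho_solve(factor, b)        # Lagrange multipliers: (A A^T) lam = b
            return A.T @ lam                  # x* = A^T lam

        A = np.array([[1.0, 1.0, 0.0],
                      [0.0, 1.0, 1.0]])
        b = np.array([1.0, 2.0])
        x = project_origin_onto_manifold(A, b)
        print(x, A @ x)                       # A @ x reproduces b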

    Views (last year): 6.
  5. Microarray datasets are highly dimensional, with a small number of collected samples in comparison to thousands of features. This poses a significant challenge that affects the interpretation, applicability and validation of the analytical results. Matrix factorizations have proven to be a useful method for describing data in terms of a small number of meta-features, which reduces noise while still capturing the essential features of the data. Three novel and mutually relevant methods are presented in this paper: 1) gradient-based matrix factorization with two adaptive learning rates (one for each factor matrix) and their automatic updates; 2) a nonparametric criterion for selecting the number of factors; and 3) a nonnegative version of the gradient-based matrix factorization which, in contrast to existing methods, does not require any extra computational cost. We demonstrate the effectiveness of the proposed methods on the supervised classification of gene expression data.
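
    A minimal sketch of the general idea of gradient-based factorization $X \approx WH$ with separate learning rates for the two factor matrices and an optional nonnegativity projection. The automatic adaptation of the two rates and the factor-number criterion from the paper are omitted; all names and constants below are illustrative assumptions, not the authors' algorithm.

        import numpy as np

        def factorize(X, k, steps=500, lr_w=1e-3, lr_h=1e-3, nonnegative=False, seed=0):
            """Approximate X (n x m) by W (n x k) @ H (k x m) with plain gradient descent."""
            rng = np.random.default_rng(seed)
            n, m = X.shape
            W = 0.1 * rng.standard_normal((n, k))
            H = 0.1 * rng.standard_normal((k, m))
            for _ in range(steps):
                R = W @ H - X                 # residual
                grad_W = R @ H.T              # gradient of 0.5 * ||W H - X||_F^2 in W
                grad_H = W.T @ R              # gradient of the same loss in H
                W -= lr_w * grad_W            # separate learning rate for W
                H -= lr_h * grad_H            # separate learning rate for H
                if nonnegative:               # projected step for the nonnegative variant
                    np.clip(W, 0.0, None, out=W)
                    np.clip(H, 0.0, None, out=H)
            return W, H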

    Citations: 4 (RSCI).
  6. Zelenkov G.A., Sviridenko A.B.
    Approach to development of algorithms of Newtonian methods of unconstrained optimization, their software implementation and benchmarking
    Computer Research and Modeling, 2013, v. 5, no. 3, pp. 367-377

    An approach to increasing the efficiency of Gill and Murray's algorithm for Newtonian methods of unconstrained optimization with step adjustment is offered; it rests on Cholesky factorization. It is proved that the strategy of choosing the descent direction also determines the solution of the problems of step scaling during descent, of approximation by non-quadratic functions, and of integration with a trust-region method.
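
    The abstract refers to Newton methods whose step adjustment rests on a Cholesky factorization. A common textbook way to get a descent direction when the Hessian may be indefinite is to add a multiple of the identity until the factorization succeeds; the sketch below shows that simpler idea, not Gill and Murray's actual modified factorization.

        import numpy as np

        def newton_direction(hess, grad, beta=1e-3, max_tries=60):
            """Descent direction from a Cholesky factorization of (hess + tau * I)."""
            n = hess.shape[0]
            tau = 0.0 if np.all(np.diag(hess) > 0.0) else beta
            for _ in range(max_tries):
                try:
                    L = np.linalg.cholesky(hess + tau * np.eye(n))
                    break
                except np.linalg.LinAlgError:
                    tau = max(2.0 * tau, beta)   # increase the shift and retry
            else:
                raise np.linalg.LinAlgError("could not make the Hessian positive definite")
            y = np.linalg.solve(L, -grad)        # forward substitution, L y = -grad
            return np.linalg.solve(L.T, y)       # back substitution, L^T d = y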

    Views (last year): 2. Citations: 7 (RSCI).
  7. Sviridenko A.B.
    Direct multiplicative methods for sparse matrices. Linear programming
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 143-165

    Multiplicative methods for sparse matrices are best suited to reducing the complexity of the systems of linear equations solved at each iteration of the simplex method. The constraint matrix in these problems is sparsely populated with nonzero elements, which makes it possible to obtain multipliers whose main columns are also sparse, so that multiplying a vector by a multiplier has complexity proportional to the number of nonzero elements of that multiplier. In addition, when passing to an adjacent basis, the multiplicative representation is quite easily updated. Improving the efficiency of such methods requires reducing the number of nonzero elements in the multiplicative representation. However, at each iteration of the algorithm another multiplier is added to the sequence, so the complexity of multiplication grows linearly with the length of the sequence. One therefore has to recompute the inverse matrix from time to time, building it anew from the identity; overall, however, this does not solve the problem. Besides, the set of multipliers is a sequence of structures whose size is inconveniently large and not known precisely. Multiplicative methods do not take into account the high degree of sparseness of the original matrices and the equality constraints, they require determining an initial basic feasible solution of the problem and, consequently, do not allow reducing the dimensionality of the linear programming problem, nor a regular compression procedure, that is, reducing the dimensionality of the multipliers and eliminating nonzero elements from all main columns of the multipliers obtained at previous iterations. Thus, the development of numerical methods for solving linear programming problems that overcome, or substantially reduce, the shortcomings of these implementation schemes of the simplex method is a topical problem of computational mathematics.
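
    For readers unfamiliar with the multiplicative (product-form-of-the-inverse) representation discussed here: the basis inverse is kept as a sequence of elementary multipliers, each equal to the identity except for one column, and applying the inverse to a vector means applying the multipliers in turn, so the cost grows with the length of the sequence. The sketch below is our own illustration of that mechanism, not the paper's algorithm; it stores only the special column of each multiplier.

        import numpy as np

        class EtaMultiplier:
            """Identity matrix with column j replaced by a given column."""
            def __init__(self, j, column):
                self.j = j
                self.col = np.asarray(column, dtype=float)

            def apply(self, v):
                """Return E @ v without forming E explicitly."""
                out = v + self.col * v[self.j]              # generic rows
                out[self.j] = self.col[self.j] * v[self.j]  # row j comes only from the column
                return out

        def apply_inverse(etas, v):
            """Compute B^{-1} v for B^{-1} = E_k ... E_1, applying E_1 first."""
            for eta in etas:                                # one pass per multiplier: the cost
                v = eta.apply(v)                            # grows with the sequence length
            return v

    Each simplex iteration appends one more multiplier, which is why the abstract stresses periodic recomputation of the inverse and compression of the representation.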

    In this paper, an approach to the construction of numerically stable direct multiplicative methods for solving linear programming problems is presented that takes into account the sparseness of matrices stored in packed form. The advantage of the approach is the reduction of dimensionality and the minimization of the fill-in of the main rows of the multipliers without compromising the accuracy of the results; no changes are made to the position of the next processed row of the matrix, which allows static data storage formats to be used.

    As a direct continuation of this work, it is proposed to take a modification of this direct multiplicative method of linear programming as the basis for constructing a direct multiplicative algorithm for setting the descent direction in Newton methods of unconstrained optimization, by integrating one of the existing techniques for constructing an essentially positive definite matrix of second derivatives.

    Views (last year): 10. Citations: 2 (RSCI).
  8. Gasnikov A.V., Gorbunov E.A., Kovalev D.A., Mohammed A.A., Chernousova E.O.
    The global rate of convergence for optimal tensor methods in smooth convex optimization
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 737-753

    In this work we consider the Monteiro – Svaiter accelerated hybrid proximal extragradient (A-HPE) framework and the accelerated Newton proximal extragradient (A-NPE) framework. The latter contains an optimal method for rather smooth convex optimization problems with a second-order oracle. We generalize the A-NPE framework to higher-order derivative oracles (schemes). We replace the Newton-type step in A-NPE, used for the auxiliary problem, by a regularized Newton (tensor) step (Yu. Nesterov, 2018). Moreover, we generalize the large-step A-HPE/A-NPE framework by replacing the Monteiro – Svaiter large-step condition so that the framework can work with high-order schemes. The main contribution of the paper is as follows: we propose optimal high-order methods for convex optimization problems. As far as we know, at the moment there exist only zeroth-, first- and second-order optimal methods that match the lower bounds. For higher-order schemes there is a gap between the lower bounds (Arjevani, Shamir, Shiff, 2017) and the existing high-order (tensor) methods (Nesterov – Polyak, 2006; Yu. Nesterov, 2008; M. Baes, 2009; Yu. Nesterov, 2018). Asymptotically, the ratio of the convergence rates of the best existing methods to the lower bounds is about 1.5. In this work we eliminate this gap and show that the lower bounds are tight. We also consider rather smooth strongly convex optimization problems and show how to generalize the proposed methods to this case. The basic idea is to use a restart technique until the iteration sequence reaches the region of quadratic convergence of Newton's method, and then to use Newton's method. One can show that the considered method converges with the optimal rates up to a logarithmic factor. Note that the technique proposed in this work can be generalized to the case when the auxiliary problem cannot be solved exactly, and even when the derivatives of the functional cannot be computed exactly. Moreover, the proposed technique can be generalized to composite optimization problems and, in particular, to constrained convex optimization problems. We also formulate a list of open questions that arise around the main result of this paper (an optimal universal high-order method, etc.).
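
    For reference, the rates involved can be written compactly (our notation; see the paper and the cited lower bounds for the exact constants): for methods of order $p$ on convex problems with Lipschitz $p$-th derivative, the optimal methods of the paper attain the lower bound

        $$f(x_N) - f(x_*) = O\!\left( \frac{L_p R^{p+1}}{N^{(3p+1)/2}} \right),$$

    whereas previously known accelerated tensor methods guaranteed only $O(N^{-(p+1)})$; the ratio of the exponents, $(3p+1)/(2p+2)$, tends to $3/2$, which is the asymptotic factor of about 1.5 mentioned above.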

    Views (last year): 75.
  9. A calculation method for the boundaries of quality classes for quantitative characteristics of systems of any nature is suggested. The method makes it possible to determine interactions that are not detectable by correlation and regression analysis; the boundaries of the quality classes of a system's condition indicator and of the factors influencing this condition; the contribution of the factors to the degree of «inadmissibility» of the indicator values; and the sufficiency of the program of observation of the factors for describing the causes of «inadmissibility» of the indicator values.

    Views (last year): 1. Citations: 6 (RSCI).
  10. Sviridenko A.B.
    Direct multiplicative methods for sparse matrices. Quadratic programming
    Computer Research and Modeling, 2018, v. 10, no. 4, pp. 407-420

    A numerically stable direct multiplicative method for solving systems of linear equations that takes into account the sparseness of matrices presented in packed form is considered. The advantage of the method is that the Cholesky factors for a positive definite matrix of the system of equations and its solution are computed within the framework of a single procedure. It also makes it possible to minimize the fill-in of the main rows of the multipliers without losing accuracy of the results; no changes are made to the position of the next processed row of the matrix, which allows static data storage formats to be used. The solution of the system of linear equations by a direct multiplicative algorithm is, like the solution with LU decomposition, just another scheme for implementing the Gaussian elimination method.

    The calculation of the Cholesky factors for a positive definite matrix of the system and its solution underlie the construction of a new mathematical formulation of the unconstrained quadratic programming problem and a new form of stating the necessary and sufficient optimality conditions, which are quite simple and are used in this paper to construct a new mathematical formulation of the quadratic programming problem on a polyhedral set of constraints: the problem of finding the minimum distance between the origin of coordinates and the boundary of the polyhedron defined by the set of constraints, by means of linear algebra and multidimensional geometry.

    To determine the distance, it is proposed to apply a known exact method based on solving systems of linear equations whose dimension is not higher than the number of variables of the objective function. The distances are determined by constructing perpendiculars to the faces of the polyhedron of different dimensions. To reduce the number of faces examined, the proposed method involves a special order of examining the faces. Only the faces containing the vertex closest to the point of the unconstrained extremum and visible from this point are subject to investigation. If there are several nearest equidistant vertices, we investigate the face containing all these vertices as well as the faces of smaller dimension that have at least two nearest vertices in common with the first face.
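
    The face-by-face distance computation described above reduces, for each face, to dropping a perpendicular from a point onto the affine hull of the face. A small illustrative sketch in our own notation (the face is given by its active equalities $Cx = d$; the paper's sparse machinery is not reproduced here):

        import numpy as np

        def foot_of_perpendicular(p, C, d):
            """Project point p onto the affine subspace {x : C x = d} (C with full row rank)."""
            mu = np.linalg.solve(C @ C.T, C @ p - d)   # (C C^T) mu = C p - d
            return p - C.T @ mu                        # foot of the perpendicular from p

        # Distance from the origin to the face x1 + x2 = 1 in the plane
        C = np.array([[1.0, 1.0]])
        d = np.array([1.0])
        x = foot_of_perpendicular(np.zeros(2), C, d)
        print(x, np.linalg.norm(x))                    # [0.5, 0.5], distance = sqrt(0.5)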

    Views (last year): 32.