Search results for 'gradient descent':
Articles found: 15
  1. Microarray datasets are highly dimensional, with a small number of collected samples compared to thousands of features. This poses a significant challenge that affects the interpretation, applicability and validation of analytical results. Matrix factorizations have proven to be a useful method for describing data in terms of a small number of meta-features, which reduces noise while still capturing the essential features of the data. Three novel and mutually relevant methods are presented in this paper: 1) gradient-based matrix factorization with two adaptive learning rates (one per factor matrix) and their automatic updates; 2) a nonparametric criterion for selecting the number of factors; and 3) a nonnegative version of the gradient-based matrix factorization which, unlike existing methods, requires no extra computational cost. We demonstrate the effectiveness of the proposed methods on the supervised classification of gene expression data.

    Citations: 4 (RSCI).
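
    A minimal illustrative sketch of the kind of scheme described above: gradient-based factorization X ~ W H with a separate, automatically updated learning rate for each factor matrix and an optional nonnegative variant via projection. The grow/shrink adaptation rule and all names below are assumptions for illustration, not the paper's exact algorithm.

    import numpy as np

    def adaptive_gmf(X, k, iters=500, lr_w=1e-3, lr_h=1e-3, nonneg=False, seed=0):
        rng = np.random.default_rng(seed)
        m, n = X.shape
        W = rng.standard_normal((m, k)) * 0.1
        H = rng.standard_normal((k, n)) * 0.1
        prev_loss = np.inf
        for _ in range(iters):
            R = W @ H - X                    # residual
            grad_W = R @ H.T                 # gradient of 0.5 * ||W H - X||_F^2 in W
            grad_H = W.T @ R                 # gradient in H
            W -= lr_w * grad_W
            H -= lr_h * grad_H
            if nonneg:                       # projected step keeps the factors nonnegative
                np.clip(W, 0.0, None, out=W)
                np.clip(H, 0.0, None, out=H)
            loss = 0.5 * np.linalg.norm(W @ H - X) ** 2
            if loss < prev_loss:             # simple automatic update of the two rates
                lr_w *= 1.05
                lr_h *= 1.05
            else:
                lr_w *= 0.5
                lr_h *= 0.5
            prev_loss = loss
        return W, H

    # toy "gene expression" matrix: 30 samples, 200 features, 5 meta-features
    X = np.abs(np.random.default_rng(1).standard_normal((30, 200)))
    W, H = adaptive_gmf(X, k=5, nonneg=True)
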
  2. Gasnikov A.V., Gorbunov E.A., Kovalev D.A., Mohammed A.A., Chernousova E.O.
    The global rate of convergence for optimal tensor methods in smooth convex optimization
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 737-753

    In this work we consider the Monteiro – Svaiter accelerated hybrid proximal extragradient (A-HPE) framework and the accelerated Newton proximal extragradient (A-NPE) framework. The latter contains an optimal method for rather smooth convex optimization problems with a second-order oracle. We generalize the A-NPE framework to higher-order derivative oracles (schemes). We replace the Newton-type step in A-NPE, used for the auxiliary problem, by a regularized Newton (tensor) step (Yu. Nesterov, 2018). Moreover, we generalize the large-step A-HPE/A-NPE framework by replacing Monteiro – Svaiter's large-step condition so that the framework can work for high-order schemes. The main contribution of the paper is as follows: we propose optimal high-order methods for convex optimization problems. As far as we know, until now optimal methods matching the lower bounds existed only for zeroth-, first- and second-order oracles. For higher-order schemes there was a gap between the lower bounds (Arjevani, Shamir, Shiff, 2017) and the existing high-order (tensor) methods (Nesterov – Polyak, 2006; Yu. Nesterov, 2008; M. Baes, 2009; Yu. Nesterov, 2018). Asymptotically, the ratio of the convergence rates of the best existing methods to the lower bounds is about 1.5. In this work we eliminate this gap and show that the lower bounds are tight. We also consider rather smooth strongly convex optimization problems and show how to generalize the proposed methods to this case. The basic idea is to use a restart technique until the iteration sequence reaches the region of quadratic convergence of Newton's method, and then switch to Newton's method. One can show that the resulting method converges with optimal rates up to a logarithmic factor. Note that the technique proposed in this work can be generalized to the case when the auxiliary problem cannot be solved exactly, and even when the derivatives of the functional cannot be computed exactly. Moreover, the proposed technique can be generalized to composite optimization problems and, in particular, to constrained convex optimization problems. We also formulate a list of open questions that arise around the main result of this paper (an optimal universal high-order method, etc.).

    Views (last year): 75.
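
    To illustrate the kind of auxiliary step mentioned above, here is a minimal sketch of one cubic-regularized Newton step (the regularized "tensor" step for p = 2, in the spirit of Nesterov – Polyak), with the cubic model minimized by plain gradient descent. This only illustrates the building block, not the accelerated A-NPE scheme of the paper; the inner solver, step sizes and the toy function are assumptions.

    import numpy as np

    def cubic_newton_step(grad, hess, x, M, inner_iters=200, inner_lr=0.05):
        """Approximately minimize the cubic model
        m(h) = <g, h> + 0.5 <H h, h> + (M / 6) * ||h||^3  over h = y - x."""
        g, H = grad(x), hess(x)
        h = np.zeros_like(x)
        for _ in range(inner_iters):                       # illustrative inner solver
            model_grad = g + H @ h + 0.5 * M * np.linalg.norm(h) * h
            h -= inner_lr * model_grad
        return x + h

    # toy usage: f(x) = 0.25 * ||x||^4, a smooth convex non-quadratic function
    f_grad = lambda x: np.linalg.norm(x) ** 2 * x
    f_hess = lambda x: np.linalg.norm(x) ** 2 * np.eye(x.size) + 2.0 * np.outer(x, x)
    x = np.array([2.0, -1.0])
    for _ in range(20):
        x = cubic_newton_step(f_grad, f_hess, x, M=6.0)
    print(x)  # approaches the minimizer at the origin
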
  3. Alkousa M.S.
    On some stochastic mirror descent methods for constrained online optimization problems
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 205-217

    The problem of online convex optimization naturally arises when statistical information is updated. The mirror descent method is well known for non-smooth optimization problems; it extends the subgradient method for solving non-smooth convex optimization problems to the case of a non-Euclidean distance. This paper is devoted to a stochastic variant of recently proposed mirror descent methods for convex online optimization problems with convex Lipschitz (generally, non-smooth) functional constraints. This means that we can still use the value of the functional constraint, but instead of the (sub)gradients of the objective functional and the functional constraint we use their stochastic (sub)gradients. More precisely, assume that on a closed subset of an $n$-dimensional vector space, $N$ convex Lipschitz non-smooth functionals are given. The problem is to minimize the arithmetic mean of these functionals subject to a convex Lipschitz constraint. Two methods using stochastic (sub)gradients are proposed for solving this problem: an adaptive method (which requires knowledge of the Lipschitz constant neither for the objective functional nor for the constraint functional) and a non-adaptive method (which requires knowledge of the Lipschitz constant for both). Note that the stochastic (sub)gradient of each functional may be computed only once. In the case of non-negative regret, we find that the number of non-productive steps is $O(N)$, which indicates the optimality of the proposed methods. We consider an arbitrary proximal structure, which is essential for decision-making problems. Results of numerical experiments are presented, allowing a comparison of the adaptive and non-adaptive methods on some examples. It is shown that the adaptive method can significantly improve the number of solutions found.

    Views (last year): 42.
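
    A minimal sketch of the alternation described above between "productive" steps (constraint nearly satisfied, step along a stochastic subgradient of the objective) and "non-productive" steps (step along a subgradient of the constraint), written with a Euclidean prox-structure. The step-size rule, the averaging and the toy problem are illustrative assumptions, not the paper's exact adaptive or non-adaptive method.

    import numpy as np

    def constrained_smd(obj_sgrad, con_val, con_sgrad, x0, eps, steps=2000, seed=0):
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        productive_sum, productive_cnt = np.zeros_like(x), 0
        for _ in range(steps):
            if con_val(x) <= eps:                    # productive step
                g = obj_sgrad(x, rng)
                productive_sum += x
                productive_cnt += 1
            else:                                    # non-productive step
                g = con_sgrad(x)
            h = eps / max(np.dot(g, g), 1e-12)       # illustrative step-size rule
            x = x - h * g                            # Euclidean prox => plain subgradient step
        return productive_sum / max(productive_cnt, 1)

    # toy usage: minimize E|a^T x - b + noise| subject to ||x||_1 - 1 <= 0
    a, b = np.array([1.0, 2.0]), 0.5
    obj = lambda x, rng: np.sign(a @ x - b + 0.01 * rng.standard_normal()) * a
    con = lambda x: np.sum(np.abs(x)) - 1.0
    con_g = lambda x: np.sign(x)
    print(constrained_smd(obj, con, con_g, x0=[0.0, 0.0], eps=0.05))
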
  4. Antonov I.V., Bruttan I.V.
    Synthesis of the structure of organised systems as central problem of evolutionary cybernetics
    Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1103-1124

    The article presents approaches to evolutionary modelling of the synthesis of organised systems and analyses methodological problems of evolutionary computations of this kind. Based on an analysis of works on evolutionary cybernetics, evolutionary theory, systems theory and synergetics, we conclude that there are open problems in formalising the synthesis of organised systems and modelling their evolution. The article emphasises that the theoretical basis for the practice of evolutionary modelling is the principles of the modern synthetic theory of evolution. Our software project uses a virtual computing environment for machine synthesis of problem-solving algorithms. In the process of modelling, we obtained results on the basis of which we conclude that a number of conditions fundamentally limit the applicability of genetic programming methods to tasks of synthesising functional structures. The main limitations are the need for the fitness function to track the step-by-step approach to the solution of the problem and the inapplicability of this approach to problems of synthesising hierarchically organised systems. We note that the results obtained in the practice of evolutionary modelling over the whole time of its existence confirm the conclusion that the possibilities of genetic programming are fundamentally limited in solving problems of synthesising the structure of organised systems. As sources of fundamental difficulty for machine synthesis of system structures, the article points to the absence of directions for gradient descent in structural synthesis and the lack of regularity in the random appearance of new organised structures. The problems considered are relevant for the theory of biological evolution. The article substantiates the statement about the biological specificity of practically possible ways of synthesising the structure of organised systems. As a theoretical interpretation of the discussed problem, we propose to consider the system-evolutionary concept of P. K. Anokhin. The process of synthesis of functional structures in this context is an adaptive response of organisms to external conditions based on their capacity for integrative synthesis of memory, needs and information about current conditions. The results of current studies favour this interpretation. We note that the physical basis of biological integrativity may be related to the phenomena of non-locality and non-separability characteristic of quantum systems. The problems considered in this paper are closely related to the problem of creating strong artificial intelligence.

  5. Pasechnyuk D.A., Stonyakin F.S.
    One method for minimization a convex Lipschitz-continuous function of two variables on a fixed square
    Computer Research and Modeling, 2019, v. 11, no. 3, pp. 379-395

    In the article we obtain estimates of the rate of convergence for the method recently proposed by Yu. E. Nesterov for minimizing a convex Lipschitz-continuous function of two variables on a square with a fixed side. The idea of the method is to divide the square into smaller parts and gradually discard them so that the solution remains localized in a sufficiently small remaining part. The method consists in solving auxiliary problems of one-dimensional minimization along the separating segments and does not require computing the exact value of the gradient of the objective functional. The main result of the paper is proved for the class of smooth convex functions having a Lipschitz-continuous gradient. Moreover, it is noted that Lipschitz continuity of the gradient need not be required on the whole square, but only on some segments. It is shown that the method can work in the presence of errors in solving the auxiliary one-dimensional problems, as well as in calculating the direction of gradients. We also describe situations when it is possible to neglect or reduce the time spent on solving the auxiliary one-dimensional problems. For some examples, experiments have demonstrated that the method can work effectively on certain classes of non-smooth functions. At the same time, an example of a simple non-smooth function is constructed for which, if the subgradient is chosen incorrectly, the convergence of the method may fail even when the auxiliary one-dimensional problem is solved exactly. Experiments have shown that the method under consideration can achieve the desired accuracy of solving the problem in less time than the other methods considered (gradient descent and the ellipsoid method). In particular, it is noted that as the desired accuracy increases, the running time of Yu. E. Nesterov's method can grow more slowly than that of the ellipsoid method.

    Views (last year): 34.
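
    A rough sketch of the localization idea described above for a smooth convex f(x, y) on a square: halve the square along alternating vertical and horizontal midlines, solve the one-dimensional problem along the separating segment, and use the sign of the orthogonal partial derivative at its minimizer to decide which half to discard. The crude grid search for the 1D subproblem and all step details are assumptions for illustration, not a faithful reproduction of Yu. E. Nesterov's method.

    import numpy as np

    def minimize_on_square(f, grad, box, iters=60, inner=200):
        (ax, bx), (ay, by) = box
        for k in range(iters):
            if k % 2 == 0:                                  # vertical separating segment x = cx
                cx = 0.5 * (ax + bx)
                ts = np.linspace(ay, by, inner)
                y_star = ts[np.argmin([f(cx, t) for t in ts])]
                if grad(cx, y_star)[0] > 0:                 # the minimum lies in the left half
                    bx = cx
                else:
                    ax = cx
            else:                                           # horizontal separating segment y = cy
                cy = 0.5 * (ay + by)
                ts = np.linspace(ax, bx, inner)
                x_star = ts[np.argmin([f(t, cy) for t in ts])]
                if grad(x_star, cy)[1] > 0:                 # the minimum lies in the lower half
                    by = cy
                else:
                    ay = cy
        return 0.5 * (ax + bx), 0.5 * (ay + by)

    # toy usage on the unit square
    f = lambda x, y: (x - 0.3) ** 2 + 2 * (y - 0.7) ** 2 + 0.5 * x * y
    g = lambda x, y: np.array([2 * (x - 0.3) + 0.5 * y, 4 * (y - 0.7) + 0.5 * x])
    print(minimize_on_square(f, g, box=((0.0, 1.0), (0.0, 1.0))))
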
  6. Gladin E.L., Zainullina K.E.
    Ellipsoid method for convex stochastic optimization in small dimension
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1137-1147

    The article considers minimization of the expectation of a convex function. Problems of this type often arise in machine learning and a variety of other applications. In practice, stochastic gradient descent (SGD) and similar procedures are usually used to solve such problems. We propose to use the ellipsoid method with mini-batching, which converges linearly and can be more efficient than SGD for a class of problems. This is verified by our experiments, which are publicly available. The algorithm requires neither smoothness nor strong convexity of the objective to achieve linear convergence; thus, its complexity does not depend on the condition number of the problem. We prove that the method arrives at an approximate solution with a given probability when using mini-batches of size proportional to the desired accuracy to the power −2. This enables efficient parallel execution of the algorithm, whereas possibilities for batch parallelization of SGD are rather limited. Despite fast convergence, the ellipsoid method can result in a greater total number of oracle calls than SGD, which works decently with small batches. The complexity is quadratic in the dimension of the problem, hence the method is suitable for relatively small dimensions.
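
    A minimal sketch of an ellipsoid method driven by mini-batch stochastic gradients, to make the "ellipsoid method with mini-batching" idea concrete for small dimensions. The batch size, starting radius and toy objective are assumptions; this is not the authors' algorithm or published code.

    import numpy as np

    def ellipsoid_minibatch(stoch_grad, x0, radius, iters=200, batch=64, seed=0):
        rng = np.random.default_rng(seed)
        n = len(x0)
        c = np.array(x0, dtype=float)
        P = (radius ** 2) * np.eye(n)          # initial ball assumed to contain the solution
        for _ in range(iters):
            g = np.mean([stoch_grad(c, rng) for _ in range(batch)], axis=0)  # mini-batch gradient
            gPg = g @ P @ g
            if gPg <= 1e-16:
                break
            gt = g / np.sqrt(gPg)
            Pg = P @ gt
            c = c - Pg / (n + 1)               # move the center into the kept half-ellipsoid
            P = (n ** 2 / (n ** 2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
        return c

    # toy usage: minimize E[0.5 * ||x - a - noise||^2] with a = (1, -2)
    a = np.array([1.0, -2.0])
    sg = lambda x, rng: x - a + 0.3 * rng.standard_normal(2)
    print(ellipsoid_minibatch(sg, x0=[0.0, 0.0], radius=10.0))
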

  7. Vostrikov D.D., Konin G.O., Lobanov A.V., Matyukhin V.V.
    Influence of the mantissa finiteness on the accuracy of gradient-free optimization methods
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 259-280

    Gradient-free optimization methods, or zeroth-order methods, are widely used in training neural networks and reinforcement learning, as well as in industrial tasks where only the values of a function at a point are available (working with non-analytical functions). In particular, the method of error back propagation in PyTorch works exactly on this principle. It is well known that computer calculations rely on floating-point arithmetic, and because of this the problem of mantissa finiteness arises.

    In this paper, firstly, we review the most popular methods of gradient approximation: finite forward/central differences (FFD/FCD), component-wise forward/central differences (FWC/CWC), and forward/central randomization on the $l_2$ sphere (FSSG2/CFFG2); secondly, we describe current theoretical models of the noise introduced by the inaccuracy of calculating the function at a point: adversarial noise and random noise; thirdly, we conduct a series of experiments on frequently encountered classes of problems, such as quadratic problems, logistic regression and SVM, to try to determine whether the real nature of machine noise corresponds to the existing theory. It turned out that in reality (at least for the classes of problems considered in this paper) machine noise is something between adversarial and random noise, and therefore the current theory on the influence of mantissa finiteness on the search for the optimum in gradient-free optimization requires some adjustment.
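
    Two of the gradient approximations mentioned above, written generically: a coordinate-wise central finite difference and a central randomized estimator with directions on the unit l2 sphere. The smoothing parameter, the number of random directions and the function names are illustrative assumptions and do not reproduce the paper's experimental setup.

    import numpy as np

    def central_difference_grad(f, x, h=1e-5):
        """Coordinate-wise central finite differences."""
        g = np.zeros_like(x, dtype=float)
        for i in range(x.size):
            e = np.zeros_like(x, dtype=float)
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
        return g

    def random_sphere_grad(f, x, h=1e-3, directions=16, seed=0):
        """Central randomized estimate with directions drawn on the unit l2 sphere."""
        rng = np.random.default_rng(seed)
        d = x.size
        g = np.zeros_like(x, dtype=float)
        for _ in range(directions):
            u = rng.standard_normal(d)
            u /= np.linalg.norm(u)
            g += (f(x + h * u) - f(x - h * u)) / (2.0 * h) * d * u
        return g / directions

    # sanity check on a quadratic, where both estimates should be close to the true gradient 2x
    f = lambda x: float(np.dot(x, x))
    x = np.array([1.0, -0.5, 2.0])
    print(central_difference_grad(f, x), random_sphere_grad(f, x))
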

  8. Kutalev A.A., Lapina A.A.
    Modern ways to overcome neural networks catastrophic forgetting and empirical investigations on their structural issues
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 45-56

    This paper presents the results of experimental validation of some structural issues concerning the practical use of methods for overcoming catastrophic forgetting in neural networks. A comparison of currently effective methods such as EWC (Elastic Weight Consolidation) and WVA (Weight Velocity Attenuation) is made and their advantages and disadvantages are considered. It is shown that EWC is better for tasks where full retention of learned skills is required on all tasks in the training queue, while WVA is more suitable for sequential tasks with very limited computational resources, or when reuse of representations and acceleration of learning from task to task is required rather than exact retention of skills. The attenuation in the WVA method must be applied to the optimization step, i.e. to the increments of neural network weights, rather than to the loss function gradient itself, and this holds for any gradient optimization method except the simplest stochastic gradient descent (SGD). The choice of the optimal weight attenuation function, between the hyperbolic function and the exponential, is considered. It is shown that hyperbolic attenuation is preferable because, despite comparable quality at the optimal value of the WVA hyperparameter, it is more robust to deviations of the hyperparameter from the optimal value (this hyperparameter provides a balance between preservation of old skills and learning of a new skill). Empirical observations are presented that support the hypothesis that the optimal value of this hyperparameter does not depend on the number of tasks in the sequential learning queue. Consequently, this hyperparameter can be tuned on a small number of tasks and used on longer sequences.
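
    A minimal sketch of the structural point above: the attenuation is applied to the optimizer's weight increment (here an Adam-like step), not to the raw loss gradient, using a hyperbolic factor 1 / (1 + lam * importance). The importance accumulator and the exact factor are illustrative assumptions, not the published WVA formulas.

    import numpy as np

    class AttenuatedAdam:
        def __init__(self, shape, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, lam=1.0):
            self.lr, self.b1, self.b2, self.eps, self.lam = lr, beta1, beta2, eps, lam
            self.m = np.zeros(shape)
            self.v = np.zeros(shape)
            self.importance = np.zeros(shape)  # grows for weights important to earlier tasks
            self.t = 0

        def step(self, w, grad):
            self.t += 1
            self.m = self.b1 * self.m + (1 - self.b1) * grad
            self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
            m_hat = self.m / (1 - self.b1 ** self.t)
            v_hat = self.v / (1 - self.b2 ** self.t)
            delta = self.lr * m_hat / (np.sqrt(v_hat) + self.eps)  # optimizer increment
            delta *= 1.0 / (1.0 + self.lam * self.importance)      # hyperbolic attenuation of the step
            self.importance += np.abs(grad * delta)                # illustrative importance update
            return w - delta

    # toy usage on a single 3-dimensional weight vector
    opt = AttenuatedAdam(shape=(3,))
    w = np.zeros(3)
    w = opt.step(w, grad=np.array([0.1, -0.2, 0.05]))
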

  9. Grenkin G.V.
    On the uniqueness of identification of reaction rate parameters in a combustion model
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1469-1476

    A model of combustion of a premixed gas mixture with one global chemical reaction is considered. The model includes second-order equations for the temperature of the mixture and the concentrations of fuel and oxidizer, and the right-hand sides of these equations contain the reaction rate function. This function depends on five unknown parameters of the global reaction and serves as an approximation to a multistep reaction mechanism. After a change of variables, the model is reduced to a single second-order equation for the mixture temperature, which is transformed into a first-order equation for the temperature derivative as a function of temperature and contains the flame propagation velocity as a parameter. Thus, to compute the burning velocity parameter, one has to solve a Dirichlet problem for the first-order equation, which then yields the model dependence of the burning velocity on the mixture equivalence ratio for specified reaction rate parameters. Given experimental data on the dependence of the burning velocity on the mixture equivalence ratio, the problem of optimal selection of reaction rate parameters is stated, based on minimizing the mean square deviation of model values of the burning velocity from experimental ones. The aim of our study is to analyse the uniqueness of the solution of this problem. To this end, we perform a computational experiment in which the problem of global search for optima is solved by multistart gradient descent. The computational experiment shows that the inverse problem in this formulation is underdetermined: each run of gradient descent from a new starting point converges to a new limit point. The structure of the set of limit points in the five-dimensional space is analysed, and it is shown that this set can be described by three linear equations. Therefore, it might be incorrect to tabulate all five reaction rate parameters based on just one criterion of agreement between model and experimental data on the flame propagation velocity. The conclusion of our study is that, in order to tabulate the reaction rate parameters correctly, it is necessary to specify the values of two of them based on additional optimality criteria.
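
    A minimal sketch of the multistart-gradient-descent identification procedure described above, showing how non-uniqueness manifests as a spread of limit points. The burning_velocity function below is a smooth five-parameter placeholder, not the combustion model of the paper, and all step sizes and ranges are assumptions.

    import numpy as np

    def burning_velocity(theta, phi):
        """Placeholder model: a smooth dependence on five parameters theta and the equivalence ratio phi."""
        a, b, c, d, e = theta
        return a + b * phi + c * phi ** 2 + d * np.exp(-e * (phi - 1.0) ** 2)

    def mse(theta, phi_exp, v_exp):
        return np.mean((burning_velocity(theta, phi_exp) - v_exp) ** 2)

    def num_grad(fun, theta, h=1e-6):
        g = np.zeros_like(theta)
        for i in range(theta.size):
            e = np.zeros_like(theta)
            e[i] = h
            g[i] = (fun(theta + e) - fun(theta - e)) / (2.0 * h)
        return g

    def multistart_gd(phi_exp, v_exp, n_starts=20, iters=2000, lr=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        fun = lambda th: mse(th, phi_exp, v_exp)
        limit_points = []
        for _ in range(n_starts):
            theta = rng.uniform(0.1, 2.0, size=5)          # random starting point
            for _ in range(iters):
                theta = theta - lr * num_grad(fun, theta)  # plain gradient descent
            limit_points.append(theta)
        return np.array(limit_points)

    # toy usage with synthetic "experimental" data
    phi = np.linspace(0.6, 1.4, 15)
    true_theta = np.array([0.3, 0.2, 0.1, 0.8, 3.0])
    v = burning_velocity(true_theta, phi) + 0.01 * np.random.default_rng(1).standard_normal(15)
    points = multistart_gd(phi, v)
    print(points.std(axis=0))  # a large spread along some directions hints at non-uniqueness
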

  10. Pletnev N.V.
    Fast adaptive by constants of strong-convexity and Lipschitz for gradient first order methods
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 947-963

    The work is devoted to the construction of efficient first-order convex optimization methods applicable to real tasks, that is, methods using only the values of the target function and its derivatives. The construction uses OGM-G, a fast gradient method that is optimal in complexity but requires knowledge of the Lipschitz constant of the gradient and the strong convexity constant to determine the number of steps and the step length. This requirement makes practical use very difficult. The algorithm ACGM, adaptive to the strong convexity constant, is proposed; it is based on restarts of OGM-G with updates of the estimate of the strong convexity constant. Also proposed is ALGM, adaptive to the Lipschitz constant of the gradient, in which OGM-G restarts are supplemented by selection of the Lipschitz constant with verification of the smoothness conditions used in the universal gradient descent method. This eliminates the disadvantages of the original method associated with the need to know these constants, and makes practical use possible. The optimality of the complexity estimates of the constructed algorithms is proved. To verify the results obtained, experiments on model functions and real tasks from machine learning are carried out.
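
    A minimal sketch of the restart-and-update idea described above: an inner fast gradient method is restarted with a budget dictated by the current estimate of the strong convexity constant, and the estimate is halved whenever the expected contraction fails. The inner routine here is a plain Nesterov-type fast gradient method rather than OGM-G, and the halve-on-failure rule is an illustrative assumption, not the ACGM algorithm itself.

    import numpy as np

    def fgm(grad, x0, L, iters):
        """Nesterov's fast gradient method for an L-smooth convex objective."""
        x, y, t = x0.copy(), x0.copy(), 1.0
        for _ in range(iters):
            x_next = y - grad(y) / L
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_next + (t - 1.0) / t_next * (x_next - x)
            x, t = x_next, t_next
        return x

    def adaptive_restart(grad, x0, L, mu0, restarts=30):
        x, mu = np.array(x0, dtype=float), mu0
        for _ in range(restarts):
            inner = int(np.ceil(4.0 * np.sqrt(L / mu)))    # budget suggested by the current mu estimate
            x_new = fgm(grad, x, L, inner)
            if np.linalg.norm(grad(x_new)) > 0.5 * np.linalg.norm(grad(x)):
                mu *= 0.5                                  # contraction failed: mu was too optimistic
            x = x_new
        return x

    # toy usage: ill-conditioned quadratic 0.5 * x^T diag(1, 100) x
    D = np.array([1.0, 100.0])
    g = lambda x: D * x
    print(adaptive_restart(g, x0=np.array([5.0, 5.0]), L=100.0, mu0=100.0))
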

