Search results for 'gradient-free optimization':
Articles found: 3
  1. Ivanova A.S., Omelchenko S.S., Kotliarova E.V., Matyukhin V.V.
    Calibration of model parameters for calculating correspondence matrix for Moscow
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 961-978

    In this paper, we consider the problem of restoring the correspondence matrix from observations of real correspondences in Moscow. Following the conventional approach [Gasnikov et al., 2013], the transport network is treated as a directed graph whose edges correspond to road sections and whose vertices correspond to the areas that traffic participants leave or enter. The number of city residents is considered constant. The problem of restoring the correspondence matrix is to calculate all correspondences from area $i$ to area $j$.

    To restore the matrix, we propose to use one of the most popular methods for calculating the correspondence matrix in urban studies: the entropy model. Following [Wilson, 1978], we describe the evolutionary justification of the entropy model and the main idea of reducing the calculation of the correspondence matrix to an entropy-linear programming (ELP) problem. To solve the ELP problem, we pass to its dual. We describe two numerical optimization methods for solving the dual problem: the Sinkhorn method and the Accelerated Sinkhorn method. We provide numerical experiments for two variants of the cost function: a linear cost function and a superposition of power and logarithmic cost functions. In these functions, the cost is a combination of the average travel time and the distance between areas and depends on a set of parameters. The correspondence matrix is calculated for multiple sets of parameters, and the quality of each restored matrix is then evaluated relative to the known correspondence matrix.
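    The Sinkhorn method mentioned above amounts to alternately rescaling the rows and columns of a kernel matrix built from the cost matrix until the departure and arrival constraints are satisfied. A minimal sketch of this balancing scheme, assuming total departures from and arrivals to each area as the constraints; the interface and the parameter gamma are illustrative, not the authors' code:

    ```python
    import numpy as np

    def sinkhorn(cost, departures, arrivals, gamma=1.0, n_iter=10000, tol=1e-9):
        """Entropy-model correspondence matrix via Sinkhorn balancing."""
        K = np.exp(-gamma * np.asarray(cost, dtype=float))  # kernel from costs
        u = np.ones(len(departures))
        v = np.ones(len(arrivals))
        for _ in range(n_iter):
            u_prev = u
            u = departures / (K @ v)        # enforce row sums (departures)
            v = arrivals / (K.T @ u)        # enforce column sums (arrivals)
            if np.max(np.abs(u - u_prev)) < tol:
                break
        return u[:, None] * K * v[None, :]  # restored correspondence matrix
    ```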

    We assume that the noise in the restored correspondence matrix is Gaussian, so we use the standard deviation as the quality metric. The article also gives an overview of gradient-free optimization methods for solving nonconvex problems. Since the cost function has few parameters, we use grid search to find its optimal parameters: the correspondence matrix is calculated for each set of parameters, and the quality of the restored matrix is evaluated relative to the known correspondence matrix. Finally, from the minimum residual value for each cost function, we determine which cost function, and at which parameter values, yields the restored matrix that best describes the real correspondences.
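    With only a few parameters, the grid search described above reduces to an exhaustive scan over a product of one-dimensional grids, scoring each restored matrix by the standard deviation of the residual. A hypothetical sketch, assuming a restore(params) routine that returns the matrix computed for a given parameter set (e.g., via the Sinkhorn sketch above):

    ```python
    import numpy as np
    from itertools import product

    def rmse(T_restored, T_known):
        """Standard deviation of the residual: the quality metric used here."""
        return np.sqrt(np.mean((T_restored - T_known) ** 2))

    def grid_search(param_grid, restore, T_known):
        """Exhaustive search over the cost-function parameters."""
        best_params, best_score = None, np.inf
        for params in product(*param_grid):
            score = rmse(restore(params), T_known)
            if score < best_score:
                best_params, best_score = params, score
        return best_params, best_score

    # Hypothetical usage: two parameters weighting travel time and distance.
    # best, err = grid_search([np.linspace(0, 1, 11), np.linspace(0, 1, 11)],
    #                         restore, T_known)
    ```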

  2. Bazarova A.I., Beznosikov A.N., Gasnikov A.V.
    Linearly convergent gradient-free methods for minimization of parabolic approximation
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 239-255

    Finding the global minimum of a nonconvex function is one of the key and most difficult problems of modern optimization. In this paper we consider special classes of nonconvex problems which have a clear and distinct global minimum.

    In the first part of the paper we consider two classes of «good» nonconvex functions, which can be bounded below and above by parabolic functions. This class of problems has not been widely studied in the literature, although it is rather interesting from an applied point of view. Moreover, for such problems first-order and higher-order methods may be completely ineffective in finding a global minimum, because the function may oscillate heavily or may be very noisy. Therefore, our new methods use only zero-order information and are based on grid search. The size and fineness of this grid, and hence the guaranteed convergence rate and oracle complexity, depend on the «goodness» of the problem. In particular, we show that if the function is bounded by fairly close parabolic functions, then the complexity is independent of the dimension of the problem. We show that our new methods converge linearly, i.e., in $O(\log(1/\varepsilon))$ iterations, to a global minimum on the cube.
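    One plausible reading of such a grid-search scheme (a sketch under the assumption $a\|x - x^*\|^2 \le f(x) - f^* \le b\|x - x^*\|^2$, not the authors' exact algorithm): evaluate the function on a uniform grid over the current cube, recenter the cube at the grid minimizer, shrink it, and repeat; the parabolic bounds keep the true minimizer inside the shrunken cube, so the cube side decreases geometrically.

    ```python
    import numpy as np
    from itertools import product

    def grid_global_min(f, lo, hi, a, b, n_rounds=30, pts_per_dim=None):
        """Zero-order grid search for f bounded between two parabolas."""
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        d = lo.size
        # Enough points per side that the cube provably shrinks each round:
        # the shrink factor below is sqrt(b * d / a) / (pts_per_dim - 1) < 1.
        if pts_per_dim is None:
            pts_per_dim = int(np.ceil(np.sqrt(b * d / a))) + 2
        center, side = (lo + hi) / 2, float(np.max(hi - lo))
        for _ in range(n_rounds):  # exhaustive per round; feasible for small d
            axes = [np.linspace(c - side / 2, c + side / 2, pts_per_dim)
                    for c in center]
            center = np.array(min(product(*axes),
                                  key=lambda x: f(np.array(x))))
            # The grid minimizer lies within (h / 2) * sqrt(b * d / a) of x*,
            # where h = side / (pts_per_dim - 1) is the grid spacing.
            side = side / (pts_per_dim - 1) * np.sqrt(b * d / a)
        return center
    ```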

    In the second part of the paper, we consider the nonconvex optimization problem from a different angle. We assume that the target function is the sum of a convex quadratic function and a nonconvex «noise» term proportional to the distance to the global solution. Considering such noise assumptions for zero-order methods is new in the literature. For this problem, we use the classical gradient-free approach in which the gradient is approximated through finite differences. We show how the convergence analysis for our problems reduces to the standard analysis for convex optimization problems. In particular, we achieve a linear convergence rate for these problems as well.
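    The classical gradient-free approach referred to here runs plain gradient descent with the gradient replaced by a finite-difference estimate built from function values only. A minimal sketch; the step sizes are illustrative, not the tuned values from the analysis:

    ```python
    import numpy as np

    def fd_gradient(f, x, h=1e-6):
        """Central finite-difference approximation of the gradient at x."""
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2 * h)
        return g

    def zero_order_descent(f, x0, lr=0.1, n_steps=200, h=1e-6):
        """Gradient descent driven only by function evaluations."""
        x = np.asarray(x0, dtype=float)
        for _ in range(n_steps):
            x = x - lr * fd_gradient(f, x, h)
        return x
    ```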

    Experimental results confirm the efficiency and practical applicability of all the obtained methods.

  3. Vostrikov D.D., Konin G.O., Lobanov A.V., Matyukhin V.V.
    Influence of the mantissa finiteness on the accuracy of gradient-free optimization methods
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 259-280

    Gradient-free optimization methods, or zeroth-order methods, are widely used in training neural networks and reinforcement learning, as well as in industrial tasks where only the values of a function at a point are available (working with non-analytical functions). In particular, the method of error back propagation in PyTorch works exactly on this principle. It is well known that computer calculations use floating-point arithmetic, which gives rise to the problem of the finite mantissa.
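    A tiny illustration of the finite-mantissa effect, assuming NumPy's float32/float64 types (not code from the paper):

    ```python
    import numpy as np

    # float32 carries a 24-bit significand, so relative increments below
    # about 2**-24 (~6e-8) are rounded away entirely:
    x = np.float32(1.0)
    print(x + np.float32(1e-8) == x)      # True: the increment vanishes
    print(np.finfo(np.float32).eps)       # ~1.19e-07, machine epsilon
    print(np.finfo(np.float64).eps)       # ~2.22e-16

    # Consequence for zeroth-order methods: a finite difference
    # (f(x + h) - f(x)) / h is dominated by this rounding noise once
    # h * |f'(x)| is comparable to eps * |f(x)|.
    ```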

    In this paper, firstly, we review the most popular methods of gradient approximation: finite forward/central differences (FFD/FCD), component-wise forward/central differences (FWC/CWC), and forward/central randomization on the $l_2$ sphere (FSSG2/CFFG2); secondly, we describe the current theoretical models of the noise introduced by inexact computation of the function value at a point: adversarial noise and random noise; thirdly, we conduct a series of experiments on frequently encountered classes of problems, such as the quadratic problem, logistic regression, and SVM, to determine whether the real nature of machine noise matches the existing theory. It turns out that, at least for the classes of problems considered in this paper, machine noise is something between adversarial and random noise, and therefore the current theory on the influence of the finite mantissa on the search for the optimum in gradient-free optimization requires some adjustment.
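    A sketch of such an experiment, assuming a quadratic test problem and generic central-difference estimators in the spirit of (though not necessarily identical to) the component-wise and sphere-randomized schemes above; all names are illustrative:

    ```python
    import numpy as np

    def central_cw(f, x, h):
        """Central component-wise gradient estimator."""
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2 * h)
        return g

    def central_sphere(f, x, h, rng):
        """Central estimator along a direction uniform on the l2 sphere."""
        e = rng.standard_normal(x.size)
        e /= np.linalg.norm(e)
        return x.size * (f(x + h * e) - f(x - h * e)) / (2 * h) * e

    # Probe machine noise: the same quadratic evaluated in float32.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 50))
    A = M.T @ M                                   # symmetric PSD quadratic
    A32 = A.astype(np.float32)
    f32 = lambda x: 0.5 * (x.astype(np.float32) @ A32 @ x.astype(np.float32))

    x = rng.standard_normal(50)
    g_true = A @ x                                # exact gradient of 0.5 x'Ax
    for h in (1e-1, 1e-3, 1e-5):
        g_cw = central_cw(f32, x, h)
        g_sp = np.mean([central_sphere(f32, x, h, rng) for _ in range(200)],
                       axis=0)
        print(f"h={h:.0e}  cw err={np.linalg.norm(g_cw - g_true):.2e}  "
              f"sphere err={np.linalg.norm(g_sp - g_true):.2e}")
    # As h shrinks, float32 rounding noise dominates and the errors grow.
    ```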
