Search results for 'optimal method':
Articles found: 135
  1. Mikhailenko S.A., Sheremet M.A.
    Simulation of convective-radiative heat transfer in a differentially heated rotating cavity
    Computer Research and Modeling, 2018, v. 10, no. 2, pp. 195-207

    Mathematical simulation of unsteady natural convection and thermal surface radiation within a rotating square enclosure was performed. The considered domain of interest had two isothermal opposite walls kept at constant low and high temperatures, while the other walls were adiabatic. The walls were diffuse and gray. The cavity rotated with constant angular velocity about an axis perpendicular to the cavity and passing through its center. The mathematical model, formulated in dimensionless transformed variables “stream function – vorticity” using the Boussinesq approximation and the diathermic medium approach, was solved numerically by the finite difference method. The vorticity equation and the energy equation were solved using the locally one-dimensional Samarskii scheme. The diffusive terms were approximated by central differences, while the convective terms were approximated using the monotonic Samarskii scheme. The difference equations were solved by the Thomas algorithm. The discretized Poisson equation for the stream function was solved by the successive over-relaxation method. The optimal value of the relaxation parameter was found on the basis of computational experiments. Radiative heat transfer was analyzed using the net-radiation method in the Poljak approach. The developed computational code was verified using grid independence analysis and experimental and numerical results for a model problem.
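
    As an illustration of the successive over-relaxation step mentioned above, here is a minimal sketch (not the authors' code) of SOR iterations for the discrete Poisson equation for the stream function, $\Delta\psi = -\omega$, on a uniform grid; the relaxation parameter, tolerance and boundary handling are illustrative assumptions.

```python
import numpy as np

def solve_poisson_sor(omega_vort, h, w=1.8, tol=1e-6, max_iter=10_000):
    """SOR iterations for the stream-function Poisson equation
    psi_xx + psi_yy = -omega on a uniform grid with step h.
    `w` is the relaxation parameter (tuned experimentally in the paper)."""
    psi = np.zeros_like(omega_vort)
    n, m = psi.shape
    for _ in range(max_iter):
        max_diff = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                new = (1 - w) * psi[i, j] + w * 0.25 * (
                    psi[i + 1, j] + psi[i - 1, j]
                    + psi[i, j + 1] + psi[i, j - 1]
                    + h * h * omega_vort[i, j]
                )
                max_diff = max(max_diff, abs(new - psi[i, j]))
                psi[i, j] = new
        if max_diff < tol:          # stop when the update is small enough
            break
    return psi
```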

    Numerical analysis of unsteady natural convection and thermal surface radiation within the rotating enclosure was performed for the following parameters: Ra = $10^3$–$10^6$, Ta = 0–$10^5$, Pr = 0.7, ε = 0–0.9. All distributions were obtained for the twentieth complete revolution, when the flow and heat transfer become periodic. As a result, we revealed that a low angular velocity can intensify the convective flow, but a further increase in angular velocity suppresses it. The radiative Nusselt number changes weakly with the Taylor number.

    Views (last year): 20.
  2. Gasnikov A.V., Kovalev D.A.
    A hypothesis about the rate of global convergence for optimal methods (Newton’s type) in smooth convex optimization
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 305-314

    In this paper we discuss lower bounds for the convergence of high-order convex optimization methods and the attainability of these bounds. We formulate a hypothesis that covers all the cases; note that we state it without proof. Newton's method is the most famous method that uses the gradient and Hessian of the optimized function. However, it converges only locally, even for strongly convex functions. Global convergence can be achieved with the cubic regularization of Newton's method [Nesterov, Polyak, 2006], whose iteration cost is comparable with that of Newton's method and is equivalent to inverting the Hessian of the optimized function. Yu. Nesterov proposed an accelerated variant of Newton's method with cubic regularization in 2008 [Nesterov, 2008]. R. Monteiro and B. Svaiter managed to improve the global convergence of the cubically regularized method in 2013 [Monteiro, Svaiter, 2013]. Y. Arjevani, O. Shamir and R. Shiff showed in 2017 that the convergence bound of Monteiro and Svaiter is optimal (it cannot be improved by more than a logarithmic factor by any second-order method) [Arjevani et al., 2017]. They also obtained lower bounds for convex optimization methods of $p$-th order for $p ≥ 2$; however, for strongly convex functions they obtained bounds only for first- and second-order methods. In 2018 Yu. Nesterov proposed third-order convex optimization methods with a rate of convergence close to these lower bounds and with an iteration cost similar to that of Newton's method [Nesterov, 2018]. Consequently, it was shown that high-order methods can be practical. In this paper we formulate lower bounds for $p$-th order methods with $p ≥ 3$ for strongly convex unconstrained optimization problems. The paper can also be viewed as a short survey of the state of the art in high-order optimization methods.
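
    For reference, the cubic regularization of Newton's method mentioned above [Nesterov, Polyak, 2006] chooses the next iterate as the minimizer of a cubically regularized second-order model,
    $$x_{k+1} = \arg\min_{y}\left\{\langle \nabla f(x_k), y - x_k\rangle + \frac{1}{2}\langle \nabla^2 f(x_k)(y - x_k), y - x_k\rangle + \frac{M}{6}\|y - x_k\|^3\right\},$$
    where $M$ is an upper bound on the Lipschitz constant of the Hessian; solving this subproblem costs about as much as inverting the Hessian.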

    Views (last year): 21. Citations: 1 (RSCI).
  3. Stonyakin F.S., Stepanov A.N., Gasnikov A.V., Titov A.A.
    Mirror descent for constrained optimization problems with large subgradient values of functional constraints
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 301-317

    The paper is devoted to the minimization of a non-smooth functional $f$ under a non-positive non-smooth Lipschitz-continuous functional constraint. We consider the formulation of the problem in the case of quasi-convex functionals. We propose new step-size strategies and adaptive stopping rules for Mirror Descent for the considered class of problems. It is shown that the methods are applicable to objective functionals of various levels of smoothness. By applying a special restart technique to the considered version of Mirror Descent, an optimal method is obtained for optimization problems with strongly convex objective functionals. Estimates of the rate of convergence of the considered methods are obtained depending on the level of smoothness of the objective functional. These estimates indicate the optimality of the considered methods from the point of view of the theory of lower oracle bounds. In particular, the optimality of our approach for Hölder-continuous quasi-convex (sub)differentiable objective functionals is proved. In addition, the case of a quasi-convex objective functional and a quasi-convex functional constraint is considered: we study the problem of minimizing a non-smooth functional $f$ in the presence of a Lipschitz-continuous non-positive non-smooth functional constraint $g$, and the cases of quasi-convex and strongly (quasi-)convex functionals are treated separately. The paper presents numerical experiments demonstrating the advantages of the considered methods.
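
    A minimal sketch of the general switching ("productive / non-productive step") subgradient scheme behind such methods, in the Euclidean setting; this is a generic illustration, not the authors' adaptive algorithm, and the step-size rule, names and output choice are assumptions.

```python
import numpy as np

def mirror_descent_constrained(f_sub, g, g_sub, x0, eps, n_iter=10_000):
    """Subgradient scheme for min f(x) s.t. g(x) <= 0 with switching:
    if the constraint is nearly satisfied, step along a subgradient of f
    (productive step), otherwise along a subgradient of g."""
    x = np.asarray(x0, dtype=float)
    productive = []
    for _ in range(n_iter):
        if g(x) <= eps:                      # productive step
            d = f_sub(x)
            productive.append(x.copy())
        else:                                # non-productive step
            d = g_sub(x)
        h = eps / max(np.dot(d, d), 1e-16)   # step size ~ eps / ||d||^2
        x = x - h * d
    # average of productive points is a common output in the convex case
    return np.mean(productive, axis=0) if productive else x
```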

  4. Ivanova A.S., Omelchenko S.S., Kotliarova E.V., Matyukhin V.V.
    Calibration of model parameters for calculating correspondence matrix for Moscow
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 961-978

    In this paper, we consider the problem of restoring the correspondence matrix based on observations of real correspondences in Moscow. Following the conventional approach [Gasnikov et al., 2013], the transport network is considered as a directed graph whose edges correspond to road sections and whose vertices correspond to areas that the traffic participants leave or enter. The number of city residents is considered constant. The problem of restoring the correspondence matrix is to calculate all the correspondences from area $i$ to area $j$.

    To restore the matrix, we propose to use one of the most popular methods for calculating the correspondence matrix in urban studies, the entropy model. Following [Wilson, 1978], we describe the evolutionary justification of the entropy model and the main idea of reducing the calculation of the correspondence matrix to an entropy-linear programming (ELP) problem. To solve the ELP problem, it is proposed to pass to the dual problem. In this paper, we describe several numerical optimization methods for solving it: the Sinkhorn method and the Accelerated Sinkhorn method. We provide numerical experiments for the following variants of the cost function: a linear cost function and a superposition of power and logarithmic cost functions. In these functions, the cost is a combination of the average travel time and distance between areas, which depends on parameters. The correspondence matrix is calculated for multiple sets of parameters, and then the quality of the restored matrix is evaluated relative to the known correspondence matrix.
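
    A minimal sketch of the classical Sinkhorn balancing iterations for an entropy model (the cost matrix, marginal vectors and regularization parameter names are illustrative assumptions; the accelerated variant from the paper is not shown).

```python
import numpy as np

def sinkhorn(cost, departures, arrivals, gamma, n_iter=1000):
    """Classical Sinkhorn iterations: find d = diag(a) K diag(b) with
    row sums `departures` and column sums `arrivals`,
    where K = exp(-cost / gamma)."""
    K = np.exp(-np.asarray(cost) / gamma)
    a = np.ones_like(departures, dtype=float)
    b = np.ones_like(arrivals, dtype=float)
    for _ in range(n_iter):
        a = departures / (K @ b)        # enforce the row marginals
        b = arrivals / (K.T @ a)        # enforce the column marginals
    return a[:, None] * K * b[None, :]  # restored correspondence matrix
```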

    We assume that the noise in the restored correspondence matrix is Gaussian; as a result, we use the standard deviation as a quality metric. The article also provides an overview of gradient-free optimization methods for solving non-convex problems. Since the number of parameters of the cost function is small, we use grid search to find the optimal parameters of the cost function. Thus, the correspondence matrix is calculated for each set of parameters, and then the quality of the restored matrix is evaluated relative to the known correspondence matrix. Finally, based on the minimum residual value for each cost function, we determine for which cost function and at what parameter values the restored matrix best describes the real correspondences.

  5. Fedina A.A., Nurgaliev A.I., Skvortsova D.A.
    Comparison of the results of using various evolution algorithms to solve the problem of route optimization of unmanned vehicles
    Computer Research and Modeling, 2022, v. 14, no. 1, pp. 45-62

    In this paper, a comparative analysis of exact and heuristic algorithms, represented by the branch-and-bound method and by genetic and ant colony algorithms, respectively, is carried out for finding the optimal solution of the traveling salesman problem using the example of a courier robot. The purpose of the work is to determine the running time, the length of the obtained route and the amount of memory required by the program when using the branch-and-bound method and evolutionary heuristic algorithms, and to identify the most appropriate of these methods for the specified conditions. The article is based on a study implemented as a computer program written in Python. In the course of the study, a number of comparison criteria were selected (the running time of the program, the length of the constructed route and the amount of memory required), the results of the algorithms were obtained under the specified conditions, and conclusions were drawn about how expedient it is to use each algorithm under various operating conditions of the courier robot. It turned out that for a small number of points ($\leqslant 10$) the branch-and-bound method is preferable, since it finds the optimal solution faster. However, when the number of points exceeds 10, its running time grows exponentially. In this case, more effective results are obtained by the heuristic approach using the genetic and ant colony algorithms. The ant colony algorithm yields the solutions closest to the reference ones, even when the number of points exceeds 16; its relative disadvantage is the greatest resource consumption among the considered algorithms. The genetic algorithm gives similar results, but for more than 16 points the length of the found route grows relative to the reference one. The advantage of the genetic algorithm is its lower resource consumption compared to the other algorithms.
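
    A toy illustration (not the authors' program) of the comparison criteria: an exact solver by exhaustive enumeration and a greedy heuristic stand in for branch and bound and the evolutionary algorithms, and running time and route length are measured for both (memory profiling is omitted); the problem size and point set are arbitrary.

```python
import itertools
import time

import numpy as np

def route_length(route, dist):
    """Total length of a closed route visiting every point once."""
    return sum(dist[route[i], route[(i + 1) % len(route)]] for i in range(len(route)))

def exhaustive_tsp(dist):
    """Exact route by full enumeration (a stand-in for branch and bound)."""
    n = len(dist)
    best = min(itertools.permutations(range(1, n)),
               key=lambda p: route_length((0,) + p, dist))
    return (0,) + best

def nearest_neighbour_tsp(dist):
    """Cheap greedy heuristic (a stand-in for the evolutionary methods)."""
    route, left = [0], set(range(1, len(dist)))
    while left:
        nxt = min(left, key=lambda j: dist[route[-1], j])
        route.append(nxt)
        left.remove(nxt)
    return route

points = np.random.rand(9, 2)                     # 9 random delivery points
dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
for solver in (exhaustive_tsp, nearest_neighbour_tsp):
    start = time.perf_counter()
    length = route_length(solver(dist), dist)
    print(f"{solver.__name__}: length {length:.3f}, "
          f"time {time.perf_counter() - start:.3f} s")
```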

    The practical significance of this article lies in the possibility of using the obtained results for the optimal solution of logistics problems by an automated system in various fields: warehouse logistics, transport logistics, «last mile» logistics, etc.

  6. Gladin E.L., Zainullina K.E.
    Ellipsoid method for convex stochastic optimization in small dimension
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1137-1147

    The article considers minimization of the expectation of a convex function. Problems of this type often arise in machine learning and a variety of other applications. In practice, stochastic gradient descent (SGD) and similar procedures are usually used to solve such problems. We propose to use the ellipsoid method with mini-batching, which converges linearly and can be more efficient than SGD for a class of problems. This is verified by our experiments, which are publicly available. The algorithm requires neither smoothness nor strong convexity of the objective to achieve linear convergence. Thus, its complexity does not depend on the condition number of the problem. We prove that the method arrives at an approximate solution with a given probability when using mini-batches of size proportional to the desired accuracy to the power −2. This enables efficient parallel execution of the algorithm, whereas the possibilities for batch parallelization of SGD are rather limited. Despite fast convergence, the ellipsoid method can result in a greater total number of oracle calls than SGD, which works decently with small batches. The complexity is quadratic in the dimension of the problem; hence the method is suitable for relatively small dimensions.
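
    A minimal sketch of the classic ellipsoid update in dimension $n \ge 2$ (deterministic oracle shown; in the paper the subgradient would be a mini-batch average, and the starting ball, iteration count and stopping rule here are illustrative).

```python
import numpy as np

def ellipsoid_method(subgrad, x0, R, n_iter=500):
    """Classic ellipsoid method (dimension n >= 2) started from the ball of
    radius R around x0.  `subgrad(x)` returns a subgradient of the objective
    (in the stochastic setting it would be averaged over a mini-batch)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = R ** 2 * np.eye(n)                 # matrix of the current ellipsoid
    for _ in range(n_iter):
        g = subgrad(x)
        gHg = float(g @ H @ g)
        if gHg <= 0:                       # degenerate ellipsoid, stop
            break
        gt = (H @ g) / np.sqrt(gHg)        # step direction in the H-metric
        x = x - gt / (n + 1)               # shift the center
        H = n ** 2 / (n ** 2 - 1.0) * (H - 2.0 / (n + 1) * np.outer(gt, gt))
    return x                               # in practice, track the best point

# usage sketch: ellipsoid_method(lambda x: 2 * x, x0=np.ones(5), R=10.0)
```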

  7. Bazarova A.I., Beznosikov A.N., Gasnikov A.V.
    Linearly convergent gradient-free methods for minimization of parabolic approximation
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 239-255

    Finding the global minimum of a nonconvex function is one of the key and most difficult problems of modern optimization. In this paper we consider special classes of nonconvex problems which have a clear and distinct global minimum.

    In the first part of the paper we consider two classes of «good» nonconvex functions, which can be bounded below and above by a parabolic function. This class of problems has not been widely studied in the literature, although it is rather interesting from an applied point of view. Moreover, for such problems first-order and higher-order methods may be completely ineffective in finding a global minimum, because the function may oscillate heavily or be very noisy. Therefore, our new methods use only zero-order information and are based on grid search. The size and fineness of this grid, and hence the guaranteed convergence speed and oracle complexity, depend on the «goodness» of the problem. In particular, we show that if the function is bounded by fairly close parabolic functions, then the complexity is independent of the dimension of the problem. We show that our new methods converge to a global minimum on the cube at a linear rate, i.e. with complexity proportional to $\log(1/\varepsilon)$.
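
    A minimal sketch of the grid-search idea in the zero-order setting (the search cube, the number of nodes per dimension and the oscillating test function are illustrative assumptions, not the classes or rules analyzed in the paper).

```python
import itertools

import numpy as np

def grid_search_on_cube(f, low, high, k):
    """Zero-order search: evaluate f at every node of a uniform grid with
    k nodes per dimension inside the box [low, high] and return the best node."""
    axes = [np.linspace(a, b, k) for a, b in zip(low, high)]
    best_x, best_val = None, np.inf
    for node in itertools.product(*axes):
        x = np.array(node)
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# usage on a heavily oscillating test function close to the parabola x @ x:
f = lambda x: x @ x + 0.1 * np.sin(50.0 * np.linalg.norm(x)) ** 2
x_star, f_star = grid_search_on_cube(f, low=[-1.0, -1.0], high=[1.0, 1.0], k=51)
```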

    In the second part of the paper, we consider the nonconvex optimization problem from a different angle. We assume that the target function to be minimized is the sum of a convex quadratic function and a nonconvex «noise» term proportional to the distance to the global solution. Considering functions with such noise assumptions is new in the literature on zero-order methods. For such a problem, we use the classical gradient-free approach with gradient approximation through finite differences. We show how the convergence analysis for our problems can be reduced to the standard analysis for convex optimization problems. In particular, we achieve a linear convergence rate for such problems as well.
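
    A minimal sketch of the classical gradient-free scheme described here: gradient descent whose gradient is replaced by a central finite-difference estimate (the step size, smoothing parameter and iteration count are illustrative).

```python
import numpy as np

def fd_gradient(f, x, tau=1e-4):
    """Central finite-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = tau
        g[i] = (f(x + e) - f(x - e)) / (2 * tau)
    return g

def zero_order_descent(f, x0, step=0.1, n_iter=1000, tau=1e-4):
    """Gradient descent that uses only function values via fd_gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - step * fd_gradient(f, x, tau)
    return x
```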

    Experimental results confirm the efficiency and practical applicability of all the obtained methods.

  8. Vostrikov D.D., Konin G.O., Lobanov A.V., Matyukhin V.V.
    Influence of the mantissa finiteness on the accuracy of gradient-free optimization methods
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 259-280

    Gradient-free optimization methods, or zeroth-order methods, are widely used in training neural networks, reinforcement learning, as well as in industrial tasks where only the values of a function at a point are available (working with non-analytical functions). In particular, the error back-propagation method in PyTorch works exactly on this principle. It is a well-known fact that computer calculations rely on the floating-point representation of numbers, which gives rise to the problem of the finiteness of the mantissa.

    In this paper, firstly, we reviewed the most popular methods of gradient approximation: finite forward/central differences (FFD/FCD), forward/central component-wise differences (FWC/CWC), and forward/central randomization on the $l_2$ sphere (FSSG2/CFFG2); secondly, we described the current theoretical models of the noise introduced by the inaccuracy of computing the function at a point: adversarial noise and random noise; thirdly, we conducted a series of experiments on frequently encountered classes of problems, such as the quadratic problem, logistic regression and SVM, to try to determine whether the real nature of machine noise corresponds to the existing theory. It turned out that in reality (at least for the classes of problems considered in this paper) machine noise is something between adversarial and random noise, and therefore the current theory of the influence of the finiteness of the mantissa on the search for the optimum in gradient-free optimization problems requires some adjustment.
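
    For reference (the notation here is generic, not necessarily that of the paper): with a smoothing parameter $\tau > 0$, the simplest estimates replace derivatives by finite differences either component-wise,
    $$\widehat{\nabla} f(x)_i = \frac{f(x + \tau e_i) - f(x - \tau e_i)}{2\tau},$$
    where $e_i$ is the $i$-th basis vector, or along a random direction $e$ uniformly distributed on the unit $l_2$ sphere,
    $$\widehat{\nabla} f(x) = d\,\frac{f(x + \tau e) - f(x - \tau e)}{2\tau}\,e,$$
    where $d$ is the problem dimension; any error in the computed values of $f$, including rounding caused by the finite mantissa, enters these estimates divided by $\tau$.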

  9. Nefedova O.A., Spevak L.P., Kazakov A.L., Lee M.G.
    Solution to a two-dimensional nonlinear heat equation using null field method
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1449-1467

    The paper deals with a heat wave motion problem for a degenerate second-order nonlinear parabolic equation with power nonlinearity. The considered boundary condition specifies, in the plane, the equation of motion of the circular zero front of the heat wave. A new numerical-analytical algorithm for solving the problem is proposed. A solution is constructed step-by-step in time using difference time discretization. At each time step, a boundary value problem for the Poisson equation corresponding to the original equation at a fixed time is considered. This problem is, in fact, an inverse Cauchy problem in a domain whose initial boundary is free of boundary conditions, while two boundary conditions (Neumann and Dirichlet) are specified on the current boundary (the heat wave). A solution of this problem is constructed as the sum of a particular solution to the nonhomogeneous Poisson equation and a solution to the corresponding Laplace equation satisfying the boundary conditions. Since the inhomogeneity depends on the desired function and its derivatives, an iterative solution procedure is used. The particular solution is sought by the collocation method using an expansion of the inhomogeneity in radial basis functions. The inverse Cauchy problem for the Laplace equation is solved by the null field method as applied to a circular domain with a circular hole; this method is used for the first time to solve such a problem. The calculation algorithm is optimized by parallelizing the computations, which allows the algorithm to be implemented efficiently on high-performance computing servers. The algorithm is implemented as a program parallelized with the OpenMP standard for the C++ language, suitable for calculations with parallel loops. The effectiveness of the algorithm and the robustness of the program are tested by comparing the calculation results with the known exact solution as well as with the numerical solution obtained earlier by the authors using the boundary element method. The computational experiments show good convergence of the iteration processes and higher calculation accuracy of the proposed algorithm compared with the previously developed one. The solution analysis allows us to select the radial basis functions that are most suitable for the proposed algorithm.
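
    A minimal sketch of the radial-basis-function collocation step for the particular solution: the linear basis $\varphi(r) = r$ with the two-dimensional identity $\Delta(r^3/9) = r$ is one common choice, and the function names and point sets below are illustrative, not the authors' final selection.

```python
import numpy as np

def rbf_particular_solution(centers, f_values):
    """Collocation for a particular solution u_p of Laplace(u_p) = f in 2D.
    The right-hand side is interpolated by phi(r) = r at the distinct points
    `centers`; the particular solution then uses Laplace(r^3 / 9) = r."""
    dist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    coeffs = np.linalg.solve(dist, f_values)     # interpolation of f

    def u_p(points):
        r = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
        return (r ** 3 / 9.0) @ coeffs           # particular solution values
    return u_p
```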

  10. Sviridenko A.B., Zelenkov G.A.
    Correlation and realization of quasi-Newton methods of absolute optimization
    Computer Research and Modeling, 2016, v. 8, no. 1, pp. 55-78

    Newton and quasi-Newton methods of absolute optimization based on Cholesky factorization with an adaptive step and finite-difference approximation of the first and second derivatives are considered. To raise the effectiveness of the quasi-Newton methods, a modified version of the Cholesky decomposition of the quasi-Newton matrix is suggested. It solves the problem of step scaling during descent, allows approximation by non-quadratic functions, and enables integration with the trust-region method. An approach to raising the effectiveness of Newton methods with finite-difference approximation of the first and second derivatives is offered. The results of a numerical study of algorithm effectiveness are presented.
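
    A minimal generic sketch of a Newton-type step built on Cholesky factorization with a positive-definiteness safeguard and a backtracking step length; this is a standard damped Newton scheme, not the modified decomposition proposed in the paper, and the regularization and line-search constants are illustrative.

```python
import numpy as np

def damped_newton_step(f, grad, hess, x, beta=1e-3, c=1e-4):
    """One Newton-type step: shift the Hessian until its Cholesky
    factorization exists, solve for the descent direction, then choose
    the step length by simple backtracking (Armijo condition)."""
    g, H = grad(x), hess(x)
    tau = 0.0
    while True:
        try:
            L = np.linalg.cholesky(H + tau * np.eye(len(g)))
            break
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, beta)                  # increase the shift and retry
    d = -np.linalg.solve(L.T, np.linalg.solve(L, g))    # (H + tau*I) d = -g
    t = 1.0
    while f(x + t * d) > f(x) + c * t * (g @ d) and t > 1e-12:
        t *= 0.5                                        # backtracking line search
    return x + t * d
```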

    Views (last year): 7. Citations: 5 (RSCI).