Search results for 'function minimization':
Articles found: 50
  1. Danilova M.Y., Malinovskiy G.S.
    Averaged heavy-ball method
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 277-308

    First-order optimization methods are workhorses in a wide range of modern applications in economics, physics, biology, machine learning, control, and other fields. Among first-order methods, accelerated and momentum methods receive special attention because of their practical efficiency. The heavy-ball method (HB) is one of the first momentum methods; it was proposed in 1964, and the first analysis was conducted for quadratic strongly convex functions. Since then, a number of variations of HB have been proposed and analyzed. In particular, HB is known for its simplicity of implementation and its performance on nonconvex problems. However, like other momentum methods, it exhibits nonmonotone behavior, and for optimal parameters the method suffers from the so-called peak effect. To address this issue, in this paper we consider an averaged version of the heavy-ball method (AHB). We show that for quadratic problems AHB has a smaller maximal deviation from the solution than HB. Moreover, for general convex and strongly convex functions, we prove non-accelerated rates of global convergence of AHB, of its weighted version WAHB, and of AHB with restarts (R-AHB). To the best of our knowledge, such guarantees for HB with averaging have not been explicitly proven for strongly convex problems in the existing literature. Finally, we conduct several numerical experiments on minimizing quadratic and nonquadratic functions to demonstrate the advantages of using averaging for HB. We also tested one more modification of AHB, the tail-averaged heavy-ball method (TAHB). In the experiments, we observed that HB with a properly adjusted averaging scheme converges faster than HB without averaging and has smaller oscillations.
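
    As a rough illustration of the averaging idea, the sketch below runs the classic heavy-ball iteration on a strongly convex quadratic and returns a uniform average of the iterates. It is only a minimal sketch under assumed step and momentum parameters; the paper's WAHB and R-AHB variants and their tuned parameters are not reproduced here.

```python
import numpy as np

def averaged_heavy_ball(A, b, x0, alpha, beta, n_iter=1000):
    """Heavy ball: x_{k+1} = x_k - alpha*grad f(x_k) + beta*(x_k - x_{k-1});
    the returned point is the uniform average of all iterates."""
    x_prev = x0.copy()
    x = x0.copy()
    x_sum = x0.copy()
    for _ in range(n_iter):
        grad = A @ x - b                      # gradient of 0.5*x'Ax - b'x
        x_next = x - alpha * grad + beta * (x - x_prev)
        x_prev, x = x, x_next
        x_sum += x
    return x_sum / (n_iter + 1)

# toy usage: 2D strongly convex quadratic; assumed (not tuned) parameters
A = np.array([[3.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
print(averaged_heavy_ball(A, b, x0=np.zeros(2), alpha=0.2, beta=0.5))
```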

  2. An algorithm is proposed to identify the parameters of a 2D vortex structure based on information about the flow velocity at a finite (small) set of reference points. The approach is based on using a set of point vortices as a model system and minimizing a functional that compares the model and known sets of velocity vectors in the space of model parameters. For the numerical implementation, the method of gradient descent with step-size control, approximation of derivatives by finite differences, and the analytical expression for the velocity field induced by the point vortex model are used. An experimental analysis of the operation of the algorithm is carried out on test flows: a single point vortex and a system of several point vortices, a Rankine vortex, and a Lamb dipole. The velocity vectors used for identification were taken from the velocity fields of the test flows at a randomly distributed set of reference points (from 3 to 200). The computations showed that the algorithm converges to the minimum from a wide range of initial approximations; the algorithm converges in all cases when the reference points are located in areas where the streamlines of the test and model systems are topologically equivalent; if the streamlines of the systems are not topologically equivalent, then the percentage of successful calculations decreases, but convergence can still take place; when the method converges, the coordinates of the vortices of the model system are close to the centers of the vortices of the test configurations, and in many cases so are the values of their circulations; convergence depends more on the location than on the number of vectors used for identification. The results of the study allow us to recommend the proposed algorithm for identifying 2D vortex structures whose streamlines are topologically close to those of systems of point vortices.
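
    A hedged sketch of the identification scheme described above, for the simplest case of a single point vortex: the functional compares model and reference velocity vectors, the gradient is approximated by forward finite differences, and the step size is controlled by halving. The parameter values and the synthetic test below are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def vortex_velocity(params, pts):
    """Velocity induced at pts by one point vortex with params = (x0, y0, Gamma)."""
    x0, y0, gamma = params
    dx, dy = pts[:, 0] - x0, pts[:, 1] - y0
    r2 = dx**2 + dy**2
    return gamma / (2.0 * np.pi) * np.column_stack((-dy / r2, dx / r2))

def misfit(params, pts, v_ref):
    """Least-squares functional comparing model and reference velocity vectors."""
    return np.sum((vortex_velocity(params, pts) - v_ref) ** 2)

def identify(pts, v_ref, params0, h=1e-6, n_iter=300):
    p = np.asarray(params0, dtype=float)
    for _ in range(n_iter):
        f0 = misfit(p, pts, v_ref)
        # forward finite-difference approximation of the gradient
        g = np.array([(misfit(p + h * e, pts, v_ref) - f0) / h for e in np.eye(p.size)])
        # step-size control: halve the step until the functional decreases
        step = 1.0
        while step > 1e-12 and misfit(p - step * g, pts, v_ref) >= f0:
            step /= 2.0
        p = p - step * g
    return p

# synthetic test: reference velocities generated by a "true" vortex at (0.3, -0.2)
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(20, 2))
v_ref = vortex_velocity((0.3, -0.2, 1.5), pts)
print(identify(pts, v_ref, params0=(0.0, 0.0, 1.0)))
```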

  3. Vetchanin E.V., Tenenev V.A., Kilin A.A.
    Optimal control of the motion in an ideal fluid of a screw-shaped body with internal rotors
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 741-759

    In this paper we consider the controlled motion of a helical body with three blades in an ideal fluid, executed by rotating three internal rotors. We pose the problem of selecting control actions that ensure motion of the body near a predetermined trajectory. To determine controls that guarantee motion near the given curve, we propose methods based on hybrid genetic algorithms (genetic algorithms with real encoding and additional learning of the leader of the population by a gradient method) and artificial neural networks. The correctness of the proposed numerical methods is assessed using previously obtained differential equations that define the law of variation of the control actions for the predetermined trajectory.

    In the approach based on hybrid genetic algorithms, the initial problem of minimizing the integral functional reduces to minimizing a function of many variables. The given time interval is broken up into small elements, on each of which the control actions are approximated by Lagrange polynomials of orders 2 and 3. When appropriately tuned, the hybrid genetic algorithms reproduce a solution close to the exact one. However, computing 1 second of the physical process takes about 300 seconds of processor time.
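
    The parameterization mentioned above can be illustrated as follows: on each small time element the control is an interpolating Lagrange polynomial through a few node values, and those node values become the variables of the finite-dimensional minimization. The node placement and values below are hypothetical.

```python
import numpy as np

def lagrange_control(t, t_nodes, u_nodes):
    """Evaluate the Lagrange interpolating polynomial of the control at time t."""
    u = 0.0
    for i, (ti, ui) in enumerate(zip(t_nodes, u_nodes)):
        basis = np.prod([(t - tj) / (ti - tj) for j, tj in enumerate(t_nodes) if j != i])
        u += ui * basis
    return u

# order-2 polynomial on one element [0, 0.1]: the three node values are the
# decision variables of the finite-dimensional problem (hypothetical numbers here)
t_nodes = [0.0, 0.05, 0.1]
u_nodes = [0.2, -0.1, 0.3]
print(lagrange_control(0.03, t_nodes, u_nodes))
```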

    To increase the speed of calculation of the control actions, we propose an algorithm based on artificial neural networks. The neural network takes the components of the required displacement vector as input and returns the node values of the Lagrange polynomials that approximately describe the control actions as output. The neural network is trained by the well-known back-propagation method. The training sample is generated using the approach based on hybrid genetic algorithms. Computing 1 second of the physical process by means of the neural network requires about 0.004 seconds of processor time, that is, roughly five orders of magnitude faster than the hybrid genetic algorithm. The control calculated by means of the artificial neural network differs from the exact control. However, in spite of this difference, it ensures that the predetermined trajectory is followed exactly.

  4. Gasnikov A.V., Kubentayeva M.B.
    Searching stochastic equilibria in transport networks by universal primal-dual gradient method
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 335-345

    We consider one of the problems of transport modelling: searching for the equilibrium distribution of traffic flows in a network. We use the classic Beckmann model to describe the time costs and the flow distribution in a network represented by a directed graph. Meanwhile, agents’ behavior is not completely rational, which is described by introducing Markov logit dynamics: each driver selects a route randomly according to the Gibbs distribution, taking into account the current time costs on the edges of the graph. Thus, the problem is reduced to searching for the stationary distribution of this dynamics, which is a stochastic Nash–Wardrop equilibrium in the corresponding population congestion game on the transport network. Since the game is potential, this problem is equivalent to minimizing some functional over flow distributions. The stochasticity is reflected in the appearance of entropy regularization, in contrast to the non-stochastic case. The dual problem is constructed to obtain a solution of the optimization problem, and the universal primal-dual gradient method is applied. A major specificity of this method is its adaptive adjustment to the local smoothness of the problem, which is most important when the objective function has a complex structure and an a priori smoothness bound cannot be obtained with acceptable accuracy. Such a situation occurs in the considered problem, since the properties of the function strongly depend on the transport graph, on which we do not impose strong restrictions. The article describes the algorithm, including the numerical differentiation used to calculate the objective function value and gradient. In addition, the paper presents a theoretical estimate of the time complexity of the algorithm and the results of numerical experiments conducted on the network of a small American town.
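
    The logit (Gibbs) route choice mentioned in this abstract admits a one-line illustration: route probabilities are proportional to exp(-cost/γ). The route costs and temperature below are made-up numbers, and this snippet is not the universal primal-dual gradient method itself.

```python
import numpy as np

def logit_route_probabilities(route_costs, gamma=1.0):
    """Gibbs distribution over routes: p_i ~ exp(-cost_i / gamma)."""
    c = np.asarray(route_costs, dtype=float)
    z = np.exp(-(c - c.min()) / gamma)   # shift for numerical stability
    return z / z.sum()

# hypothetical time costs on three routes and an assumed "temperature" gamma
print(logit_route_probabilities([10.0, 12.0, 11.0], gamma=2.0))
```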

  5. Dvinskikh D.M., Pirau V.V., Gasnikov A.V.
    On the relations of stochastic convex optimization problems with empirical risk minimization problems on $p$-norm balls
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319

    In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e. g., risk minimization) and mathematical statistics (e. g., maximum likelihood estimation). There are two main approaches to solving such problems, namely the Stochastic Approximation approach (online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline approach). In the offline approach, the problem is replaced by its empirical counterpart (the empirical risk minimization problem). The natural question is how to define the problem sample size, i. e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is also a solution of the original problem with the desired precision. This issue is one of the main issues in modern machine learning and optimization. In the last decade, a lot of significant advances were made in these areas for convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of arbitrary balls in $p$-norms. We also explore how the parameter $p$ affects the estimates of the required number of terms in the empirical risk.
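
    A minimal sketch of the offline (Sample Average Approximation) idea: the expected risk is replaced by an average over N sampled realizations, and the empirical counterpart is minimized. The linear least-squares loss below is only an assumed example, not tied to the paper's setting.

```python
import numpy as np

def empirical_risk(w, xs, ys):
    """Sample average of the loss: (1/N) * sum_i (x_i^T w - y_i)^2."""
    return np.mean((xs @ w - ys) ** 2)

# sample N realizations and minimize the empirical counterpart
# (closed form for least squares)
rng = np.random.default_rng(1)
N, d = 200, 3
xs = rng.normal(size=(N, d))
ys = xs @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=N)
w_hat = np.linalg.lstsq(xs, ys, rcond=None)[0]   # minimizer of the empirical risk
print(w_hat, empirical_risk(w_hat, xs, ys))
```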

    In this paper, both convex and saddle point optimization problems are considered. For strongly convex problems, the existing results on the same sample sizes in both approaches (online and offline) were generalized to arbitrary norms. Moreover, it was shown that the strong convexity condition can be weakened: the obtained results are valid for functions satisfying the quadratic growth condition. In the case when this condition is not met, it is proposed to use regularization of the original problem in an arbitrary norm. In contrast to convex problems, saddle point problems are much less studied. For saddle point problems, the sample size was obtained under the condition of $\gamma$-growth of the objective function. When $\gamma = 1$, this condition corresponds to the sharp-minimum condition in convex problems. In this article, it was shown that the sample size in the case of a sharp minimum is almost independent of the desired accuracy of the solution of the original problem.

  6. Dvurechensky P.E.
    A gradient method with inexact oracle for composite nonconvex optimization
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 321-334

    In this paper, we develop a new first-order method for composite nonconvex minimization problems with simple constraints and an inexact oracle. The objective function is given as a sum of a «hard», possibly nonconvex part and a «simple» convex part. Informally speaking, oracle inexactness means that, for the «hard» part, at any point we can approximately calculate the value of the function and construct a quadratic function which approximately bounds this function from above. We give several examples of such inexactness: smooth nonconvex functions with an inexact Hölder-continuous gradient, and functions given by an auxiliary uniformly concave maximization problem which can be solved only approximately. For the introduced class of problems, we propose a gradient-type method which allows one to use a different proximal setup to adapt to the geometry of the feasible set, adaptively chooses the controlled oracle error, and allows for an inexact proximal mapping. We provide a convergence rate for our method in terms of the norm of the generalized gradient mapping and show that, in the case of an inexact Hölder-continuous gradient, our method is universal with respect to the Hölder parameters of the problem. Finally, in a particular case, we show that a small value of the norm of the generalized gradient mapping at a point means that a necessary condition of local minimum approximately holds at that point.
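
    The composite step underlying such methods can be sketched as follows: the «hard» part is majorized by a quadratic with constant L (possibly only an approximate upper bound, which is the inexact-oracle setting), the «simple» part is here assumed to be the indicator of a box, and the norm of the generalized gradient mapping L(x - x⁺) serves as the stationarity measure. This is an illustrative sketch, not the paper's exact method.

```python
import numpy as np

def composite_gradient_step(x, grad_f, L, lower, upper):
    """argmin_y <grad_f, y - x> + (L/2)||y - x||^2 + I_box(y) = proj_box(x - grad_f/L)."""
    return np.clip(x - grad_f / L, lower, upper)

def generalized_gradient_mapping(x, x_next, L):
    """G(x) = L*(x - x_next); a small ||G(x)|| indicates approximate stationarity."""
    return L * (x - x_next)

# one step on the hypothetical "hard" part f(x) = 0.5*||x - c||^2 over the box [0, 1]^2
c = np.array([1.5, -0.5])
x = np.array([0.2, 0.8])
grad = x - c
x_next = composite_gradient_step(x, grad, L=1.0, lower=0.0, upper=1.0)
print(x_next, np.linalg.norm(generalized_gradient_mapping(x, x_next, L=1.0)))
```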

  7. Kovalenko S.Yu., Yusubalieva G.M.
    Survival task for the mathematical model of glioma therapy with blood-brain barrier
    Computer Research and Modeling, 2018, v. 10, no. 1, pp. 113-123

    The paper proposes a mathematical model of glioma therapy that takes into account the blood-brain barrier, radiotherapy, and antibody therapy. The parameters were estimated from experimental data, and the effect of the parameter values on the effectiveness of treatment and the prognosis of the disease was evaluated. Possible variants of the sequential use of radiotherapy and antibody therapy have been explored. The combined use of radiotherapy with intravenous administration of mAb Cx43 leads to a potentiation of the therapeutic effect in glioma.

    Radiotherapy must precede chemotherapy, as radiation exposure reduces the barrier function of endothelial cells. Endothelial cells of the brain vessels adjoin each other tightly. Between their walls, so-called tight junctions are formed; their role in providing the BBB is to prevent the penetration of various undesirable substances from the bloodstream into the brain tissue. Tight junctions between endothelial cells block passive intercellular transport.

    The mathematical model consists of a continuous part and a discrete one. Experimental data on the volume of glioma show the following interesting dynamics: after cessation of radiation exposure, tumor growth does not resume immediately; there is some time interval during which the glioma does not grow. Glioma cells are divided into two groups. The first group is living cells that divide as fast as possible. The second group is cells affected by radiation. As a measure of the health of the blood-brain barrier system, the ratio of the number of BBB cells at the current moment to the number of cells at rest, that is, in the average healthy state, is chosen.

    The continuous part of the model includes a description of the division of both types of glioma cells, the recovery of BBB cells, and the dynamics of the drug. Reducing the number of well-functioning BBB cells facilitates the penetration of the drug to brain cells, that is, enhances the action of the drug. At the same time, the rate of division of glioma cells does not increase, since it is limited not by the deficiency of nutrients available to cells, but by the internal mechanisms of the cell. The discrete part of the mathematical model includes the operator of radio interaction, which is applied to the indicator of BBB and to glial cells.

    Within the framework of the mathematical model of treatment of a cancerous tumor (glioma), the problem of optimal control with phase constraints is solved. The patient’s condition is described by two variables: the volume of the tumor and the condition of the BBB. The phase constraints delineate a certain area in the space of these indicators, which we call the survival area. Our task is to find treatment strategies that minimize the time of treatment, maximize the patient’s rest time, and at the same time keep the state indicators within the permitted limits. Since the task of survival is to maximize the patient’s lifespan, it is precisely such treatment strategies that return the indicators to their original position (and we see periodic trajectories on the graphs). Periodic trajectories indicate that the deadly disease is converted into a chronic one.

  8. Puchinin S.M., Korolkov E.R., Stonyakin F.S., Alkousa M.S., Vyguzov A.A.
    Subgradient methods with B.T. Polyak-type step for quasiconvex minimization problems with inequality constraints and analogs of the sharp minimum
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 105-122

    In this paper, we consider two variants of the concept of sharp minimum for mathematical programming problems with a quasiconvex objective function and inequality constraints. We investigate the problem of describing a variant of a simple subgradient method with switching between productive and non-productive steps for which, on a class of problems with Lipschitz functions, it is possible to guarantee convergence at the rate of a geometric progression to the set of exact solutions or its vicinity. It is important that the implementation of the proposed method does not require knowledge of the sharp minimum parameter, which is usually difficult to estimate in practice. To overcome this problem, the authors propose to use a step adjustment procedure similar to the one previously proposed by B. T. Polyak. However, in this case, in comparison with the class of unconstrained problems, there arises the problem of knowing the exact minimal value of the objective function. The paper describes the conditions on the inexactness of this information that make it possible to preserve convergence at the rate of a geometric progression to a vicinity of the set of minimum points of the problem. Two analogs of the concept of a sharp minimum for problems with inequality constraints are considered. In the first one, the problem of approximating the exact solution is posed only up to a pre-selected level of accuracy; here we consider the case when the minimal value of the objective function is unknown and only some approximation of this value is given. We describe conditions on the inexact minimal value of the objective function under which convergence to a vicinity of the desired set of points at the rate of a geometric progression is still preserved. The second considered variant of the sharp minimum does not depend on the desired accuracy of the problem. For this variant, we propose a slightly different way of checking whether a step is productive, which allows us to guarantee convergence of the method to the exact solution at the rate of a geometric progression in the case of exact information. Convergence estimates are proved under conditions of weak convexity of the constraints and some restrictions on the choice of the initial point, and a corollary is formulated for the convex case, when the need for an additional assumption on the choice of the initial point disappears. For both approaches, it has been proven that the distance from the current point to the set of solutions decreases with the number of iterations. This, in particular, makes it possible to restrict the requirements on the properties of the functions used (Lipschitz continuity, sharp minimum) to a bounded set. Some computational experiments are performed, including for the truss topology design problem.
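
    A schematic version of a subgradient method with switching and a Polyak-type step is sketched below for a single inequality constraint: on productive steps (the constraint is nearly satisfied) the objective subgradient is used with step (f(x) - f*)/||s||², and on non-productive steps the constraint subgradient is used. The toy problem and tolerances are assumptions; this is not the exact scheme analyzed in the paper.

```python
import numpy as np

def switching_subgradient(f, subgrad_f, g, subgrad_g, x0, f_star, eps, n_iter=1000):
    """Minimize f(x) subject to g(x) <= 0 (one constraint for simplicity)."""
    x = np.asarray(x0, dtype=float)
    best = None
    for _ in range(n_iter):
        if g(x) <= eps:                                       # productive step
            s = subgrad_f(x)
            step = (f(x) - f_star) / (np.dot(s, s) + 1e-16)   # Polyak-type step
            if best is None or f(x) < f(best):
                best = x.copy()
        else:                                                 # non-productive step
            s = subgrad_g(x)
            step = g(x) / (np.dot(s, s) + 1e-16)
        x = x - step * s
    return best if best is not None else x

# toy usage: minimize |x1| + |x2| subject to x1 + x2 >= 1, with known f_star = 1
f = lambda x: abs(x[0]) + abs(x[1])
sf = lambda x: np.sign(x)
g = lambda x: 1.0 - x[0] - x[1]
sg = lambda x: np.array([-1.0, -1.0])
print(switching_subgradient(f, sf, g, sg, x0=[2.0, 2.0], f_star=1.0, eps=1e-3))
```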

  9. Khusainov R.R., Mamedov S.N., Savin S.I., Klimchik A.S.
    Searching for realizable energy-efficient gaits of planar five-link biped with a point contact
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 155-170

    In this paper, we discuss the procedure for finding nominal trajectories of a planar five-link bipedal robot with point contact. To this end we use the virtual constraints method, which transforms the robot’s dynamics to a low-dimensional zero manifold; we also use nonlinear optimization algorithms to find virtual constraint parameters that minimize the robot’s cost of transportation. We analyzed the effect of the degree of the Bezier polynomials that approximate the virtual constraints, and of torque continuity, on the cost of transportation. Based on numerical results we found that it is sufficient to consider polynomials of degree five or six, as a further increase in the polynomial degree increases computation time while not guaranteeing a reduction of the cost of transportation. Moreover, it was shown that the introduction of torque continuity constraints does not lead to a significant increase in the objective function and makes the gait more implementable on a real robot.

    We propose a two-step procedure for finding a minimum of the considered optimization problem, whose objective function is the cost of transportation and which has a large number of constraints. During the first step we solve a feasibility problem: we remove the cost function (set it to zero) and search for a feasible solution in the parameter space. During the second step we introduce the objective function and use the solution found in the first step as the initial guess. For the first step we put forward an algorithm for finding an initial guess that considerably reduces the optimization time of the first step (down to 3–4 seconds) compared to random initialization. Comparison of the objective functions of the solutions found during the first and second steps showed that on average the second step reduced the objective function twofold, even though the overall computation time increased significantly.
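
    The two-step procedure can be sketched with a generic NLP solver: first a feasibility problem with a zero objective is solved, then the real objective is minimized starting from the feasible point found. The toy objective and constraint are placeholders, and SciPy's SLSQP stands in for the authors' optimizer.

```python
import numpy as np
from scipy.optimize import minimize

def two_step(objective, constraints, x0):
    # step 1: feasibility problem -- constant zero objective under the constraints
    feas = minimize(lambda x: 0.0, x0, constraints=constraints, method="SLSQP")
    # step 2: the real objective, warm-started from the feasible point
    return minimize(objective, feas.x, constraints=constraints, method="SLSQP")

# toy usage: minimize a quadratic subject to x0 + x1 >= 1
obj = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
cons = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]
print(two_step(obj, cons, x0=np.zeros(2)).x)
```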

  10. Kashchenko N.M., Ishanov S.A., Zubkov E.V.
    Numerical model of transport in problems of instabilities of the Earth’s low-latitude ionosphere using a two-dimensional monotonized Z-scheme
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1011-1023

    The aim of the work is to study a monotone finite-difference scheme of second-order accuracy, constructed as a generalization of the one-dimensional Z-scheme. The study was carried out for model transport equations of an incompressible medium. The paper describes a two-dimensional generalization of the Z-scheme with nonlinear correction, which uses, instead of fluxes, oblique differences containing values from different time layers. The monotonicity of the obtained nonlinear scheme is verified numerically for limiter functions of two types, both for smooth and for nonsmooth solutions, and numerical estimates of the order of accuracy of the constructed scheme are obtained.

    The constructed scheme is absolutely stable, but it loses monotonicity when the Courant step is exceeded. A distinctive feature of the proposed finite-difference scheme is the minimality of its stencil. The constructed numerical scheme is intended for models of plasma instabilities of various scales in the low-latitude ionospheric plasma of the Earth. One of the real problems in which such equations arise is the numerical simulation of highly nonstationary medium-scale processes in the Earth’s ionosphere under conditions of the development of the Rayleigh–Taylor instability and of smaller-scale plasma structures generated by instabilities of other types, which leads to the phenomenon of F-scattering (spread F). Since the transport processes in the ionospheric plasma are controlled by the magnetic field, the plasma incompressibility condition is assumed to hold in the direction transverse to the magnetic field.
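
    The following sketch is not the authors' two-dimensional Z-scheme; it is a generic one-dimensional advection step with a minmod-limited correction, shown only to illustrate how a nonlinear correction keeps a second-order transport scheme monotone while the Courant condition holds.

```python
import numpy as np

def minmod(a, b):
    """Nonlinear correction (limiter): keeps the scheme monotone."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_step(u, c):
    """One step for u_t + a*u_x = 0 on a periodic grid, Courant number c = a*dt/dx in (0, 1]."""
    du_left = u - np.roll(u, 1)              # backward differences
    du_right = np.roll(u, -1) - u            # forward differences
    slope = minmod(du_left, du_right)        # limited slope
    flux = u + 0.5 * (1.0 - c) * slope       # upwind flux with second-order correction
    return u - c * (flux - np.roll(flux, 1))

# toy usage: transport a step profile without spurious oscillations
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = ((x > 0.2) & (x < 0.4)).astype(float)
for _ in range(100):
    u = advect_step(u, c=0.5)
```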

