Search results for 'minimization method':
Articles found: 55
  1. Lobanov A.I., Mirov F.Kh.
    On the use of difference schemes for the transport equation with a drain in grid modeling
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1149-1164

    Modern power transmission systems are complex engineering systems. They include both point facilities (power producers, consumers, transformer substations, etc.) and distributed elements (e.g., power lines). When mathematical models are created, such structures are represented as graphs with different types of nodes. To study the dynamic effects in such systems, it is necessary to solve a system of partial differential equations of hyperbolic type.

    The work uses an approach similar to one applied earlier to modeling related problems, together with a new variant of the splitting method proposed by the authors. Unlike most known works, the splitting is not carried out according to physical processes (energy transport without dissipation, dissipative processes treated separately). Instead, the equations are split into transport equations with a drain and the exchange between Riemann invariants. This splitting makes it possible to construct hybrid schemes for the Riemann invariants with a high order of approximation and minimal dissipation error. An example of constructing such a hybrid difference scheme is described for a single-phase power line. The proposed difference scheme is based on an analysis of the properties of schemes in the space of undetermined coefficients.
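
    To make the building block concrete, below is a minimal sketch (not the authors' hybrid scheme) of a first-order upwind step for a single transport equation with a drain, $w_t + c\,w_x = -\gamma w$, which is the form each Riemann invariant takes after such a splitting; all names and parameters here are illustrative.

        import numpy as np

        def upwind_step(w, c, gamma, dx, dt):
            """One explicit upwind step for w_t + c*w_x = -gamma*w, with c > 0.

            A first-order scheme for illustration only; the paper constructs
            higher-order hybrid schemes with minimal dissipation error.
            Stable when c*dt/dx <= 1 (CFL condition).
            """
            w_new = np.empty_like(w)
            # upwind difference for c > 0: information travels to the right
            w_new[1:] = w[1:] - c * dt / dx * (w[1:] - w[:-1]) - dt * gamma * w[1:]
            w_new[0] = w[0] - dt * gamma * w[0]  # inflow boundary kept trivial here
            return w_new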

    Examples of numerical solutions of a model problem using the proposed splitting and difference scheme are given. The numerical results show that the difference scheme reproduces the emerging regions of large gradients. It is also shown that such difference schemes can detect resonances in these systems.

  2. Chen J., Lobanov A.V., Rogozin A.V.
    Nonsmooth Distributed Min-Max Optimization Using the Smoothing Technique
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 469-480

    Distributed saddle point problems (SPPs) have numerous applications in optimization, matrix games and machine learning. For example, the training of generative adversarial networks can be represented as a min-max optimization problem, and training regularized linear models can be reformulated as an SPP as well. This paper studies distributed nonsmooth SPPs with Lipschitz-continuous objective functions. The objective function is represented as a sum of several components that are distributed between groups of computational nodes. The nodes, or agents, exchange information through some communication network that may be centralized or decentralized. A centralized network has a universal information aggregator (a server, or master node) that directly communicates with each of the agents and can therefore coordinate the optimization process. In a decentralized network, all the nodes are equal, the server node is not present, and each agent only communicates with its immediate neighbors.
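
    In this setting the problem can be written, in notation added here for illustration, as $\min\limits_{x}\max\limits_{y}\;\sum_{i=1}^{m} f_i(x, y)$, where the $i$-th group of nodes holds the component $f_i$ and the communication network constrains how local information about the $f_i$ can be shared.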

    We assume that each of the nodes locally holds its objective and can compute its value at given points, i.e., it has access to a zero-order oracle. Zero-order information is used when the gradient of the function is costly or impossible to compute, or when the function is not differentiable. For example, in reinforcement learning one needs to generate a trajectory to evaluate the current policy; this policy evaluation process can be interpreted as the computation of the function value. We propose an approach that uses a smoothing technique, i.e., applies a first-order method to a smoothed version of the initial function. It can be shown that the stochastic gradient of the smoothed function can be viewed as a random two-point gradient approximation of the initial function (see the sketch below). Smoothing approaches have been studied for distributed zero-order minimization, and our paper generalizes the smoothing technique to SPPs.
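
    As a sketch of the smoothing idea (illustrative, not the paper's exact estimator), the randomized two-point approximation replaces the unavailable gradient with a finite difference of function values along a random direction:

        import numpy as np

        def two_point_grad(f, x, tau, rng):
            """Random two-point gradient approximation of a smoothed f.

            In expectation over e this estimates the gradient of a smoothed
            version of f; tau is the smoothing radius. Only function values
            are used, i.e., a zero-order oracle suffices.
            """
            e = rng.standard_normal(x.shape)
            e /= np.linalg.norm(e)  # random direction on the unit sphere
            d = x.size
            return d * (f(x + tau * e) - f(x - tau * e)) / (2 * tau) * e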

  3. Ablaev S.S., Makarenko D.V., Stonyakin F.S., Alkousa M.S., Baran I.V.
    Subgradient methods for non-smooth optimization problems with some relaxation of sharp minimum
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 473-495

    Non-smooth optimization arises in many applied problems, and developing efficient computational procedures for such problems in high-dimensional spaces is a topical issue. First-order (subgradient) methods are well suited here, but in fairly general situations they give weak rate guarantees for large-scale problems. One approach is to identify a subclass of non-smooth problems that admits comparatively optimistic convergence-rate results. For example, one possible additional assumption is the sharp minimum condition proposed in the late 1960s by B. T. Polyak. When the minimal value of the function is available, for Lipschitz-continuous problems with a sharp minimum it is possible to use a subgradient method with the Polyak step-size, which guarantees a linear rate of convergence in the argument (a sketch of this step is given below). This approach covers a number of important applied problems (for example, the problem of projecting onto a convex compact set). However, both the availability of the minimal value of the function and the sharp minimum condition itself look rather restrictive. In this regard, we propose a generalized sharp minimum condition, somewhat similar to the inexact oracle recently proposed by Devolder – Glineur – Nesterov. The proposed approach extends the applicability of subgradient methods with the Polyak step-size to the situation of inexact information about the minimal value, as well as an unknown Lipschitz constant of the objective function. Moreover, the use of local analogs of the global characteristics of the objective function makes it possible to apply results of this type to wider classes of problems. We show that the proposed approach applies to strongly convex nonsmooth problems, and we give an experimental comparison with the known optimal subgradient method for this class of problems. We also obtain results on the applicability of the proposed technique to some types of problems with convexity relaxations: the recently proposed notion of weak $\beta$-quasi-convexity and ordinary quasiconvexity. Finally, we study a generalization of the described technique to the situation where a $\delta$-subgradient of the objective function is available instead of the usual subgradient. For one of the considered methods, we find conditions under which the projection of the iterative sequence onto the feasible set of the problem can be avoided in practice.
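
    For reference, a minimal sketch of the classical subgradient step with the Polyak step-size (assuming, as in the classical setting, that the minimal value f_star is known; it is exactly this assumption that the paper relaxes):

        import numpy as np

        def polyak_subgradient(f, subgrad, x0, f_star, n_iters=1000):
            """Subgradient method with the Polyak step-size
            h_k = (f(x_k) - f*) / ||g_k||^2.

            Under a sharp minimum this step choice gives a linear rate
            of convergence in the argument.
            """
            x = np.asarray(x0, dtype=float)
            for _ in range(n_iters):
                g = subgrad(x)
                gap = f(x) - f_star
                if gap <= 0 or not np.any(g):
                    break  # reached the minimal value or a zero subgradient
                x = x - gap / np.dot(g, g) * g
            return x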

  4. Rusyak I.G., Nefedov D.G.
    Solution of optimization problem of wood fuel facility location by the thermal energy cost criterion
    Computer Research and Modeling, 2012, v. 4, no. 3, pp. 651-659

    The paper presents a mathematical model for the optimal location of enterprises producing fuel from renewable wood waste for a regional distributed heat supply system. The optimization minimizes the total cost of the end product, the thermal energy obtained from wood fuel. The solution method is based on a genetic algorithm (a generic sketch is given below). The paper also presents practical results of the model using the Udmurt Republic as an example.
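
    As an illustration of the solution approach (a generic sketch: the encoding and the cost function are placeholders, not the authors' model), a facility-location instance can be encoded as a 0/1 mask of opened sites and optimized with a simple genetic algorithm:

        import numpy as np

        def genetic_facility_location(cost, n_sites, pop=50, gens=200, p_mut=0.02, seed=0):
            """Minimal GA: each individual is a 0/1 mask of opened facilities.

            `cost` maps a mask to the total cost of thermal energy
            (a placeholder for the model's objective).
            """
            rng = np.random.default_rng(seed)
            population = rng.integers(0, 2, size=(pop, n_sites))
            for _ in range(gens):
                fitness = np.array([cost(ind) for ind in population])
                parents = population[np.argsort(fitness)[: pop // 2]]  # truncation selection
                a = parents[rng.integers(len(parents), size=pop)]
                b = parents[rng.integers(len(parents), size=pop)]
                k = rng.integers(1, n_sites, size=pop)                 # one-point crossover
                children = np.where(np.arange(n_sites) < k[:, None], a, b)
                flip = rng.random(children.shape) < p_mut              # bit-flip mutation
                population = np.where(flip, 1 - children, children)
            fitness = np.array([cost(ind) for ind in population])
            return population[np.argmin(fitness)]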

  5. Tupitsa N.K.
    On accelerated adaptive methods and their modifications for alternating minimization
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 497-515

    In the first part of the paper we present a convergence analysis of the AGMsDR method on a new class of functions: in general non-convex functions with $M$-Lipschitz-continuous gradients that satisfy the Polyak – Lojasiewicz condition. The method does not need the value of $\mu^{PL}>0$ from the condition and converges linearly with a scale factor $\left(1 - \frac{\mu^{PL}}{M}\right)$. It was previously proved that the method converges as $O\left(\frac1{k^2}\right)$ if the function is convex and has an $M$-Lipschitz-continuous gradient, and converges linearly with a scale factor $\left(1 - \sqrt{\frac{\mu^{SC}}{M}}\right)$ if the value of the strong convexity parameter $\mu^{SC}>0$ is known. The novelty is that linear convergence is preserved even when $\mu^{PL}$ (or $\mu^{SC}$) is not known, although without the square root in the scale factor.
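
    For reference, the Polyak – Lojasiewicz condition used here is, in its standard form, $\frac{1}{2}\|\nabla f(x)\|^2 \geq \mu^{PL}\left(f(x) - f^{*}\right)$ for all $x$, where $f^{*}$ is the minimal value of $f$; it does not require convexity.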

    The second part presents a modification of the AGMsDR method for solving problems that allow alternating minimization (Alternating AGMsDR). Similar results are proved.

    As a result, we present adaptive accelerated methods that converge as $O\left(\min\left\lbrace\frac{M}{k^2},\,\left(1-{\frac{\mu^{PL}}{M}}\right)^{(k-1)}\right\rbrace\right)$ on the class of convex functions with an $M$-Lipschitz-continuous gradient that satisfy the Polyak – Lojasiewicz condition. The algorithms do not need the values of $M$ and $\mu^{PL}$. If the Polyak – Lojasiewicz condition does not hold, the convergence is $O\left(\frac1{k^2}\right)$, and no tuning is needed.

    We also consider an adaptive catalyst envelope of non-accelerated gradient methods, which provides acceleration up to $O\left(\frac1{k^2}\right)$. We present a numerical comparison of non-accelerated adaptive gradient descent accelerated by the adaptive catalyst envelope with AGMsDR, Alternating AGMsDR, APDAGD (Adaptive Primal-Dual Accelerated Gradient Descent) and Sinkhorn's algorithm on the problem dual to the optimal transport problem (a standard Sinkhorn sketch is given below).
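
    For context, Sinkhorn's algorithm used in the comparison alternates closed-form scaling updates on the entropically regularized transport problem, which is itself an alternating minimization over the dual variables (a standard sketch, not the experimental code of the paper):

        import numpy as np

        def sinkhorn(C, a, b, eps=0.1, n_iters=500):
            """Standard Sinkhorn iterations for entropy-regularized optimal transport.

            C: cost matrix, a and b: source/target marginals, eps: regularization.
            Alternately rescales rows and columns of K = exp(-C/eps) so that the
            transport plan matches both marginals.
            """
            K = np.exp(-C / eps)
            u = np.ones_like(a)
            for _ in range(n_iters):
                v = b / (K.T @ u)  # fit the column marginals
                u = a / (K @ v)    # fit the row marginals
            return u[:, None] * K * v[None, :]  # the transport plan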

    The experiments show faster convergence of Alternating AGMsDR compared with the described catalyst approach and with AGMsDR, despite the same asymptotic rate $O\left(\frac1{k^2}\right)$. This behavior can be explained by the linear convergence of the AGMsDR method and was verified on quadratic functions. Alternating AGMsDR also demonstrated better performance than AGMsDR.
