Search results for 'smoothing':
Articles found: 64
  1. Kashchenko N.M., Ishanov S.A., Zubkov E.V.
    Numerical model of transport in problems of instabilities of the Earth’s low-latitude ionosphere using a two-dimensional monotonized Z-scheme
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1011-1023

    The aim of the work is to study a monotone finite-difference scheme of second-order accuracy, built as a generalization of the one-dimensional Z-scheme. The study is carried out for model transport equations of an incompressible medium. The paper describes a two-dimensional generalization of the Z-scheme with a nonlinear correction, which uses, instead of fluxes, oblique differences containing values from different time layers. The monotonicity of the resulting nonlinear scheme is verified numerically for two types of limiting functions, both for smooth and for nonsmooth solutions, and numerical estimates of the order of accuracy of the constructed scheme are obtained.

    The constructed scheme is absolutely stable, but it loses monotonicity when the Courant step limit is exceeded. A distinctive feature of the proposed finite-difference scheme is the minimality of its stencil. The constructed numerical scheme is intended for models of plasma instabilities of various scales in the low-latitude ionospheric plasma of the Earth. One of the real problems whose solution leads to such equations is the numerical simulation of highly nonstationary medium-scale processes in the Earth's ionosphere under conditions of the Rayleigh – Taylor instability and of plasma structures with smaller scales, whose generation mechanisms are instabilities of other types; this leads to the spread-F phenomenon. Since the transport processes in the ionospheric plasma are controlled by the magnetic field, it is assumed that the plasma incompressibility condition holds in the direction transverse to the magnetic field.
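
    As a rough illustration of the kind of monotone transport discretization discussed above (not the authors' two-dimensional Z-scheme with oblique differences), the sketch below advances a one-dimensional advection equation with a flux-limited (minmod) nonlinear correction; the grid, the limiter choice and the Courant number are assumptions made purely for illustration.

        import numpy as np

        def minmod(a, b):
            """Minmod limiter: the nonlinear correction that keeps the scheme monotone."""
            return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        def advect_step(u, a, dt, dx):
            """One flux-limited upwind step for u_t + a u_x = 0 with a > 0, periodic boundaries.

            Near extrema the limited slope vanishes and the scheme falls back to
            first-order upwind, which preserves monotonicity; the step must respect
            the Courant condition a * dt / dx <= 1.
            """
            nu = a * dt / dx
            s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))   # limited slope in each cell
            flux = a * (u + 0.5 * (1.0 - nu) * s)               # flux through the i+1/2 face
            return u - (dt / dx) * (flux - np.roll(flux, 1))

        # Advect a step profile: no new extrema (over/undershoots) appear.
        dx = 1.0 / 200
        x = np.arange(0.0, 1.0, dx)
        u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)
        for _ in range(100):
            u = advect_step(u, a=1.0, dt=0.4 * dx, dx=dx)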

  2. Ostroukhov P.A., Kamalov R.A., Dvurechensky P.E., Gasnikov A.V.
    Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 357-376

    In this paper we propose high-order (tensor) methods for two types of saddle point problems. Firstly, we consider the classic min-max saddle point problem. Secondly, we consider the search for a stationary point of the saddle point objective via minimization of its gradient norm. Obviously, the stationary point does not always coincide with the optimal point. However, if we have a linear optimization problem with linear constraints, the algorithm for gradient norm minimization becomes useful: in this case we can reconstruct the solution of the primal optimization problem from the solution of the gradient norm minimization problem for the dual function. In this paper we consider both types of problems without constraints. Additionally, we assume that the objective function is $\mu$-strongly convex in the first argument, $\mu$-strongly concave in the second argument, and that its $p$-th derivative is Lipschitz-continuous.

    For min-max problems we propose two algorithms. Since we consider a strongly convex – strongly concave problem, the first algorithm uses an existing tensor method for regular convex-concave saddle point problems and accelerates it with the restart technique (a generic restart sketch is given after this abstract); the complexity of such an algorithm is linear. If we additionally assume that our objective has Lipschitz-continuous first and second derivatives, we can improve its performance even more: we can switch to another existing algorithm within its region of quadratic convergence. Thus, we get the second algorithm, which has a global linear convergence rate and a local quadratic convergence rate.

    Finally, in convex optimization there exists a special methodology for solving gradient norm minimization problems by tensor methods. Its main idea is to use existing (near-)optimal algorithms inside a special framework. We want to emphasize that inside this framework we do not necessarily need the strong convexity assumptions, because we can regularize the convex objective in a special way to make it strongly convex. In our article we transfer this framework to convex-concave objective functions and combine it with our aforementioned algorithm with global linear and local quadratic convergence rates.

    Since the saddle point problem is a particular case of the monotone variational inequality problem, the proposed methods will also work for strongly monotone variational inequality problems.
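
    The restart technique mentioned above can be sketched generically: a base method with sublinear guarantees for convex-concave problems is warm-started from its own output with the target accuracy halved at every stage, which under strong convexity-concavity yields an overall linear (geometric) rate. The base_method call below is a hypothetical interface, not the authors' tensor method.

        def restarted_method(base_method, z0, R0, n_restarts):
            """Generic restart wrapper over a sublinear-rate base method.

            base_method(z_start, eps) is a hypothetical call assumed to return a
            point within distance eps of the saddle point when warm-started at
            z_start. Under mu-strong convexity-concavity each stage needs only a
            bounded number of base iterations, so halving the target radius at
            every restart yields a geometric (linear) overall rate.
            """
            z, radius = z0, R0
            for _ in range(n_restarts):
                radius /= 2.0              # next stage halves the distance to the solution
                z = base_method(z, radius)
            return z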

  3. Mikishanina E.A., Platonov P.S.
    Motion control by a highly maneuverable mobile robot in the task of following an object
    Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1301-1321

    This article is devoted to the development of a trajectory control algorithm for a highly maneuverable four-wheeled robotic transport platform equipped with mecanum wheels, with the goal of organizing its movement behind a moving object. The kinematic relations of the platform in a fixed coordinate system are derived; they are needed to determine the angular velocities of the robot's wheels for a given velocity vector. An algorithm is developed for the robot to follow a mobile object on an obstacle-free plane, based on a modified chase (pursuit) method with different types of control functions. In the chase method, the velocity vector of the geometric center of the platform is co-directed with the vector connecting the geometric center of the platform and the moving object. Two types of control functions are implemented: piecewise and constant. The piecewise function corresponds to control with switching modes depending on the distance from the robot to the target; its main feature is a smooth change of the robot's speed. The control functions also differ in the behavior of the robot as it approaches the target. With one of the piecewise functions, the robot's motion slows down once a certain distance between the robot and the target is reached and stops completely at a critical distance. Another type of behavior when approaching the target is to reverse the direction of the velocity vector if the distance between the platform and the object falls below the minimum allowable value, which avoids collisions when the target moves toward the robot; this behavior is implemented for both a piecewise and a constant function. Numerical simulation of the control algorithm for various control functions is performed for the task of chasing a target that moves in a circle. The pseudocode of the control algorithm and of the control functions is presented. Plots of the robot's trajectory while following the target, of the speed, and of the time histories of the wheel angular velocities are shown for the various control functions.
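
    A minimal sketch of the chase law and of the wheel-level inverse kinematics described above, assuming a standard mecanum-wheel layout: the commanded velocity points from the platform center to the target, with a piecewise speed profile that slows down near the target and stops at a critical distance. All thresholds, gains and geometric parameters are illustrative assumptions, and the reversal behavior is omitted.

        import numpy as np

        def chase_velocity(robot_pos, target_pos, v_max=1.0, d_slow=2.0, d_stop=0.5):
            """Velocity command of the chase (pursuit) method with a piecewise speed profile.

            Direction: from the platform's geometric center toward the target.
            Speed: v_max far away, linear slowdown inside d_slow, full stop inside d_stop.
            """
            r = np.asarray(target_pos, float) - np.asarray(robot_pos, float)
            d = np.linalg.norm(r)
            if d < 1e-9:
                return np.zeros(2)
            if d <= d_stop:
                speed = 0.0
            elif d <= d_slow:
                speed = v_max * (d - d_stop) / (d_slow - d_stop)
            else:
                speed = v_max
            return speed * (r / d)

        def mecanum_wheel_speeds(vx, vy, omega, half_length=0.3, half_width=0.25, wheel_radius=0.05):
            """Inverse kinematics of a standard four-mecanum-wheel platform (assumed geometry).

            Returns angular velocities [front-left, front-right, rear-left, rear-right]
            for a commanded body velocity (vx, vy) and yaw rate omega.
            """
            k = half_length + half_width
            return np.array([vx - vy - k * omega,
                             vx + vy + k * omega,
                             vx + vy - k * omega,
                             vx - vy + k * omega]) / wheel_radius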

  4. Ostroukhov P.A.
    Tensor methods inside mixed oracle for min-min problems
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 377-398

    In this article we consider min-min type problems, i.e. minimization over two groups of variables. In some sense this is similar to the classic min-max saddle point problem, although saddle point problems are usually more difficult. Min-min problems may occur when some groups of variables in a convex optimization problem have different dimensions or different domains. Such a problem structure gives us the ability to split the main task into subproblems and allows us to tackle it with mixed oracles. However, existing articles on this topic cover only zeroth- and first-order oracles; in our work we consider high-order tensor methods to solve the inner problem and a fast gradient method to solve the outer problem (this mixed structure is sketched schematically after this abstract).

    We assume that the outer problem is constrained to a convex compact set, while for the inner problem we consider both the unconstrained case and the case of a convex compact constraint set. By definition, tensor methods use high-order derivatives, so the time per iteration strongly depends on the dimensionality of the problem being solved. Therefore, we assume that the dimension of the inner variable is not greater than 1000. Additionally, we need some specific assumptions to be able to use mixed oracles. Firstly, we assume that the objective is convex in both groups of variables and that its gradient with respect to both variables is Lipschitz continuous. Secondly, we assume that the inner problem is strongly convex and that its gradient is Lipschitz continuous. Also, since we are going to use tensor methods for the inner problem, we need its $p$-th derivative to be Lipschitz continuous ($p > 1$). Finally, we assume strong convexity of the outer problem in order to use the fast gradient method for strongly convex functions.

    We emphasize that we use the superfast tensor method to tackle the inner subproblem in the unconstrained case, and the accelerated high-order composite proximal method when the inner problem is solved on a compact set.

    Additionally, at the end of the article we compare the theoretical complexity of the obtained methods with that of the plain gradient method, which solves the problem as a regular convex optimization problem and does not take its structure into account (Remarks 1 and 2).
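
    The mixed-oracle structure can be shown schematically: for each outer query at a point y, the inner problem in x is solved approximately, which yields an inexact value and gradient of the reduced function g(y) = min_x F(x, y) to be fed to an outer gradient-type method. The inner solver below (SciPy's BFGS) and the fixed outer step stand in for the superfast tensor method and the fast gradient method of the paper; they are assumptions made only to illustrate the nesting.

        import numpy as np
        from scipy.optimize import minimize

        def inexact_outer_oracle(F, grad_y, y, x_start):
            """Inexact first-order oracle for g(y) = min_x F(x, y).

            The inner minimization over x is delegated to a generic solver (BFGS,
            standing in for a tensor method). Since F is strongly convex in x, the
            gradient of g at y equals the partial gradient of F in y at the inner minimizer.
            """
            inner = minimize(lambda x: F(x, y), x_start, method="BFGS")
            return inner.x, grad_y(inner.x, y)

        def outer_loop(F, grad_y, y0, x0, step=0.1, iters=50):
            """Plain unconstrained outer gradient loop (stand-in for the fast gradient method)."""
            x, y = np.asarray(x0, float), np.asarray(y0, float)
            for _ in range(iters):
                x, gy = inexact_outer_oracle(F, grad_y, y, x)  # warm-start the inner problem
                y = y - step * gy
            return x, y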

  5. Stonyakin F.S., Savchuk O.S., Baran I.V., Alkousa M.S., Titov A.A.
    Analogues of the relative strong convexity condition for relatively smooth problems and adaptive gradient-type methods
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 413-432

    This paper is devoted to several ways of improving the convergence rate guarantees of gradient-type algorithms for relatively smooth and relatively Lipschitz-continuous problems when additional information about analogues of strong convexity of the objective function is available. We consider two classes of problems: convex problems with a relative functional growth condition, and problems (generally non-convex) with an analogue of the Polyak – Lojasiewicz gradient dominance condition with respect to the Bregman divergence. For the first class of problems, we propose two restart schemes for gradient-type methods and justify theoretical convergence estimates for two algorithms with adaptively chosen parameters corresponding to the relative smoothness or relative Lipschitz continuity of the objective function. The first of these algorithms is simpler in terms of the per-iteration stopping criterion, but for this algorithm near-optimal computational guarantees are justified only on the class of relatively Lipschitz-continuous problems. The restart procedure of the other algorithm, in its turn, allowed us to obtain more universal theoretical results: we proved a near-optimal complexity estimate on the class of convex relatively Lipschitz-continuous problems with a functional growth condition, and we also obtained linear convergence rate guarantees on the class of relatively smooth problems with a functional growth condition. For the class of problems with an analogue of the gradient dominance condition with respect to the Bregman divergence, estimates of the quality of the output solution were obtained using adaptively selected parameters. We also present the results of computational experiments illustrating the performance of the methods of the second approach at the end of the paper. As examples, we considered a linear inverse Poisson problem (minimizing the Kullback – Leibler divergence), its regularized version, which guarantees relative strong convexity of the objective function, as well as an example of a relatively smooth and relatively strongly convex problem. In particular, the calculations show that a relatively strongly convex function may not satisfy the relative variant of the gradient dominance condition.
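
    One of the experiments mentioned above, the linear inverse Poisson problem, amounts to minimizing the Kullback – Leibler divergence between the observations b and the linear model Ax. A minimal sketch of that objective and its gradient is given below; the epsilon safeguard and the absence of the regularizer are illustrative simplifications.

        import numpy as np

        def kl_poisson_objective(A, b, x, eps=1e-12):
            """Kullback-Leibler objective of the linear inverse Poisson problem: KL(b || Ax)."""
            Ax = A @ x
            return float(np.sum(Ax - b + b * np.log((b + eps) / (Ax + eps))))

        def kl_poisson_gradient(A, b, x, eps=1e-12):
            """Gradient of the KL objective: A^T (1 - b / (Ax))."""
            Ax = A @ x
            return A.T @ (1.0 - b / (Ax + eps))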

  6. Aksenov A.A., Pokhilko V.I., Moryak A.P.
    Usage of boundary layer grids in numerical simulations of viscous phenomena in ship hydrodynamics problems
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 995-1008

    Numerical simulation of hull flow, marine propellers and other basic problems of ship hydrodynamics using Cartesian adaptive locally refined grids is advantageous with respect to numerical setup and makes express analysis very convenient. However, when viscous phenomena must be resolved more accurately, several problems arise, including a sharp increase of the cell count due to the high levels of main-grid adaptation needed to resolve boundary layers, and a decrease of the time step in simulations with a free surface due to the decreased transit time in adapted cells. To avoid these disadvantages, additional boundary layer grids are suggested for the resolution of boundary layers. The boundary layer grids are one-dimensional adaptations of the main-grid layers nearest to a wall, built along the normal direction. The boundary layer grids are additional (chimera-type) grids: their volumes are not subtracted from the main-grid volumes. The governing flow equations are integrated on both grids simultaneously, and the solutions are merged according to a special algorithm. In simulations of ship hull flow, boundary layer grids are able to provide conditions sufficient for low-Reynolds turbulence models and significantly improve the flow structure in continuous boundary layers along smooth surfaces. When there are flow separations or other complex phenomena on the hull surface, it can be subdivided into regions, and the boundary layer grids should be applied only to the regions with simple flow; this still provides a drastic decrease of the computational effort. In simulations of marine propellers, the boundary layer grids make it possible to avoid wall functions on blade surfaces, which leads to significantly more accurate hydrodynamic forces. By altering the number and configuration of boundary grid layers, it is possible to vary the boundary layer resolution without changing the main grid. This makes the boundary layer grids a suitable tool for investigating scale effects in both of the problems considered.

  7. Krivovichev G.V.
    Difference splitting schemes for the system of one-dimensional equations of hemodynamics
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 459-488

    The work is devoted to the construction and analysis of difference schemes for a system of hemodynamic equations obtained by averaging the hydrodynamic equations of a viscous incompressible fluid over the vessel cross-section. Models of blood as an ideal and as a viscous Newtonian fluid are considered. Difference schemes that approximate the equations to second order in the spatial variable are proposed. The computational algorithms of the constructed schemes are based on splitting with respect to physical processes: at each time step the model equations are considered separately and sequentially. The practical implementation of the proposed schemes at each time step reduces to the sequential solution of two linear systems with tridiagonal matrices (a generic tridiagonal sweep is sketched after this abstract). It is demonstrated that the schemes are $\rho$-stable under minor restrictions on the time step in the case of sufficiently smooth solutions.

    For a problem with a known analytical solution, it is demonstrated that the numerical solution has second-order convergence over a wide range of spatial grid steps. The proposed schemes are compared with well-known explicit schemes, such as the Lax – Wendroff, Lax – Friedrichs and MacCormack schemes, in computational experiments on modeling blood flow in model vascular systems. It is demonstrated that the results obtained using the proposed schemes are close to the results obtained using other computational schemes, including schemes constructed with other approaches to spatial discretization. It is also demonstrated that, for various spatial grids, the computation time of the proposed schemes is significantly smaller than that of the explicit schemes, despite the need to solve systems of linear equations at each step. The disadvantages of the schemes are the limitation on the time step in the case of discontinuous or strongly varying solutions and the need to use extrapolation of values at the boundary points of the vessels. In this regard, the adaptation of the splitting schemes to problems with discontinuous solutions and to special types of conditions at the vessel ends is a promising direction for further research.
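
    Since each time step of the splitting schemes above reduces to linear systems with tridiagonal matrices, the per-step cost is linear in the number of grid nodes. A generic Thomas (tridiagonal) sweep of the kind such an implementation would rely on is sketched below; it is not the authors' code.

        import numpy as np

        def thomas_solve(lower, diag, upper, rhs):
            """Solve a tridiagonal system in O(n) (forward elimination + back substitution).

            lower is the sub-diagonal (lower[0] unused), diag the main diagonal,
            upper the super-diagonal (upper[-1] unused), rhs the right-hand side.
            """
            n = len(diag)
            b = np.array(diag, dtype=float)
            c = np.array(upper, dtype=float)
            d = np.array(rhs, dtype=float)
            for i in range(1, n):
                w = lower[i] / b[i - 1]
                b[i] -= w * c[i - 1]
                d[i] -= w * d[i - 1]
            x = np.empty(n)
            x[-1] = d[-1] / b[-1]
            for i in range(n - 2, -1, -1):
                x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
            return x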

  8. Tominin Y.D., Tominin V.D., Borodich E.D., Kovalev D.A., Dvurechensky P.E., Gasnikov A.V., Chukanov S.V.
    On Accelerated Methods for Saddle-Point Problems with Composite Structure
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 433-467

    We consider strongly-convex-strongly-concave saddle-point problems with a general non-bilinear objective and different condition numbers with respect to the primal and dual variables. First, we consider such problems with smooth composite terms, one of which has a finite-sum structure. For this setting we propose a variance reduction algorithm with complexity estimates superior to the existing bounds in the literature. Second, we consider finite-sum saddle-point problems with composite terms and propose several algorithms depending on the properties of the composite terms. When the composite terms are smooth, we obtain better complexity bounds than the ones in the literature, including the bounds of recently proposed nearly-optimal algorithms which do not consider the composite structure of the problem. If the composite terms are prox-friendly, we propose a variance reduction algorithm that, on the one hand, is accelerated compared to existing variance reduction algorithms and, on the other hand, provides in the composite setting complexity bounds similar to those of the nearly-optimal algorithm designed for the non-composite setting. Besides, our algorithms allow one to separate the complexity bounds, i.e. to estimate, for each part of the objective separately, the number of oracle calls that is sufficient to achieve a given accuracy. This is important since different parts can have different arithmetic complexity of the oracle, and it is desirable to call expensive oracles less often than cheap ones. The key to all these results is our general framework for saddle-point problems, which may be of independent interest. This framework, in turn, is based on our proposed Accelerated Meta-Algorithm for composite optimization with probabilistic inexact oracles and probabilistic inexactness in the proximal mapping, which may be of independent interest as well.

  9. Savchuk O.S., Titov A.A., Stonyakin F.S., Alkousa M.S.
    Adaptive first-order methods for relatively strongly convex optimization problems
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 445-472

    The article is devoted to first-order adaptive methods for optimization problems with relatively strongly convex functionals. The concept of relative strong convexity significantly extends the classical concept of convexity by replacing the Euclidean norm in the definition by a distance in a more general sense (more precisely, by the Bregman divergence). An important feature of the considered classes of problems is the reduced requirements concerning the level of smoothness of the objective functionals. More precisely, we consider relatively smooth and relatively Lipschitz-continuous objective functionals, which allows us to apply the proposed techniques to many applied problems, such as the intersection of ellipsoids problem (IEP), the Support Vector Machine (SVM) for a binary classification problem, etc. If the objective functional is convex, the condition of relative strong convexity can be satisfied by regularizing the problem. In this work, we propose for the first time adaptive gradient-type methods for optimization problems with relatively strongly convex and relatively Lipschitz-continuous functionals. Further, we propose universal methods for relatively strongly convex optimization problems. This technique is based on introducing an artificial inaccuracy into the optimization model, so the proposed methods can be applied both to relatively smooth and to relatively Lipschitz-continuous functionals. Additionally, we demonstrate the optimality of the proposed universal gradient-type methods, up to a constant factor, for both classes of relatively strongly convex problems. Also, we show how the technique of restarting the mirror descent algorithm can be applied to relatively Lipschitz-continuous optimization problems, and we prove an optimal estimate of the convergence rate of this technique. Finally, we present the results of numerical experiments comparing the performance of the proposed methods.
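
    The building block behind the mirror descent restarts mentioned above is a single mirror descent step, in which the Euclidean projection is replaced by a Bregman proximal step. The entropic setup on the probability simplex below (KL divergence as the Bregman divergence) is the textbook instance and is given purely as an illustration; the fixed step size and the absence of adaptivity and restarts are simplifying assumptions.

        import numpy as np

        def entropic_mirror_step(x, grad, step):
            """One mirror descent step with the entropy prox on the probability simplex.

            argmin_y { step * <grad, y> + KL(y || x) } has the closed form of a
            multiplicative update followed by renormalization.
            """
            y = x * np.exp(-step * grad)
            return y / y.sum()

        def mirror_descent(grad_f, x0, step, iters):
            """Plain (non-restarted) mirror descent; a restart scheme would wrap this loop."""
            x = np.asarray(x0, float)
            for _ in range(iters):
                x = entropic_mirror_step(x, grad_f(x), step)
            return x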

  10. Chen J., Lobanov A.V., Rogozin A.V.
    Nonsmooth Distributed Min-Max Optimization Using the Smoothing Technique
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 469-480

    Distributed saddle point problems (SPPs) have numerous applications in optimization, matrix games and machine learning. For example, the training of generative adversarial networks is represented as a min-max optimization problem, and training regularized linear models can be reformulated as an SPP as well. This paper studies distributed nonsmooth SPPs with Lipschitz-continuous objective functions. The objective function is represented as a sum of several components that are distributed between groups of computational nodes. The nodes, or agents, exchange information through a communication network that may be centralized or decentralized. A centralized network has a universal information aggregator (a server, or master node) that communicates directly with each of the agents and therefore can coordinate the optimization process. In a decentralized network, all the nodes are equal, there is no server node, and each agent only communicates with its immediate neighbors.

    We assume that each of the nodes locally holds its objective and can compute its value at given points, i.e. has access to a zero-order oracle. Zero-order information is used when the gradient of the function is costly or impossible to compute, or when the function is not differentiable. For example, in reinforcement learning one needs to generate a trajectory to evaluate the current policy; this policy evaluation process can be interpreted as the computation of a function value. We propose an approach that uses a smoothing technique, i.e. applies a first-order method to a smoothed version of the initial function. It can be shown that the stochastic gradient of the smoothed function can be viewed as a random two-point gradient approximation of the initial function. Smoothing approaches have been studied for distributed zero-order minimization, and our paper generalizes the smoothing technique to SPPs.
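
    The two-point view of the smoothed stochastic gradient mentioned above can be made concrete: the function is evaluated at two points shifted by plus/minus tau along a random unit direction e, and the scaled difference times e estimates the gradient of the smoothed function. The smoothing radius and the direction distribution below are illustrative assumptions.

        import numpy as np

        def two_point_gradient(f, x, tau=1e-3, rng=None):
            """Two-point zero-order estimate of the gradient of the smoothed function.

            e is a random direction on the unit sphere; the estimator
            (d / (2 * tau)) * (f(x + tau * e) - f(x - tau * e)) * e
            uses only two function evaluations, i.e. two calls to the zero-order oracle.
            """
            rng = rng or np.random.default_rng()
            d = x.size
            e = rng.normal(size=d)
            e /= np.linalg.norm(e)
            return (d / (2.0 * tau)) * (f(x + tau * e) - f(x - tau * e)) * e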
