Search results for 'global optimization':
Articles found: 18
  1. Nikonov E.G., Nazmitdinov R.G., Glukhovtsev P.I.
    Molecular dynamics studies of equilibrium configurations of equally charged particles in planar systems with circular symmetry
    Computer Research and Modeling, 2022, v. 14, no. 3, pp. 609-618

    The equilibrium configurations of equally charged particles confined in a hard-disk potential are analysed by means of a hybrid numerical algorithm. The algorithm is based on interpolation formulas obtained from an analysis of the equilibrium configurations provided by the variational principle developed within the circular model. The solution of the nonlinear equations of the circular model yields a shell structure composed of a series of rings. Each ring contains a certain number of particles, which decreases as one moves from the boundary ring to the central one. The number of rings depends on the total number of electrons. The interpolation formulas provide the initial configurations for the molecular dynamics calculations. This approach makes it possible to reach an equilibrium configuration for an arbitrarily chosen number of particles significantly faster than the Metropolis simulated annealing algorithm and other algorithms based on global optimization methods.
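    The idea of seeding a local relaxation with a ring-structured initial guess can be illustrated by the toy sketch below (Python). The ring occupancies, radii and step sizes are our own assumptions for illustration, not the interpolation formulas of the paper; the relaxation is a plain steepest descent on the Coulomb energy with a hard-wall projection rather than a full molecular dynamics run.

    import numpy as np

    def ring_start(occupancies, radii):
        """Place particles on concentric rings (counts and radii are assumptions)."""
        pts = []
        for n, r in zip(occupancies, radii):
            phi = 2 * np.pi * np.arange(n) / n
            pts.append(np.c_[r * np.cos(phi), r * np.sin(phi)])
        return np.vstack(pts)

    def relax(x, steps=20000, lr=1e-4):
        """Steepest descent on the Coulomb energy with a hard-wall (unit disk) projection."""
        for _ in range(steps):
            d = x[:, None, :] - x[None, :, :]               # pairwise displacements
            r2 = (d ** 2).sum(-1) + np.eye(len(x))          # avoid division by zero on the diagonal
            force = (d / r2[..., None] ** 1.5).sum(axis=1)  # repulsive Coulomb forces
            x = x + lr * force
            norm = np.linalg.norm(x, axis=1, keepdims=True)
            x = np.where(norm > 1.0, x / norm, x)           # project escapers back onto the disk boundary
        return x

    x0 = ring_start([3, 9], [0.35, 0.85])   # 12 particles: an inner ring of 3, an outer ring of 9
    x_eq = relax(x0)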

  2. Ososkov G.A., Bakina O.V., Baranov D.A., Goncharov P.V., Denisenko I.I., Zhemchugov A.S., Nefedov Y.A., Nechaevskiy A.V., Nikolskaya A.N., Shchavelev E.M., Wang L., Sun S., Zhang Y.
    Tracking on the BESIII CGEM inner detector using deep learning
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1361-1381

    The reconstruction of charged particle trajectories in tracking detectors is a key problem in the analysis of experimental data for high energy and nuclear physics.

    The amount of data in modern experiments is so large that classical tracking methods such as the Kalman filter cannot process it fast enough. To solve this problem, we have developed two neural network algorithms for track recognition, based on deep learning architectures, for local (track by track) and global (all tracks in an event) tracking in the GEM tracker of the BM@N experiment at JINR (Dubna). The advantage of deep neural networks is their ability to detect hidden nonlinear dependencies in data and the capability of parallel execution of the underlying linear algebra operations.

    In this work we generalize these algorithms to the cylindrical GEM inner tracker of the BESIII experiment. The neural network model RDGraphNet for global track finding, based on a reverse directed graph, has been successfully adapted. After training on Monte Carlo data, testing showed encouraging results: a recall of 98% and a precision of 86% for track finding.

    The local neural network model TrackNETv2 was also successfully adapted to the BESIII CGEM. Since the tracker has only three detecting layers, an additional neuro-classifier to filter out false tracks has been introduced. Preliminary tests demonstrated a recall of 99% at the first stage. After applying the neuro-classifier, the precision was 77% with a slight decrease in recall to 94%. This result can be improved by further model optimization.
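    As a rough illustration of the candidate-filtering stage described above, the sketch below (Python/PyTorch) scores three-layer track candidates as "track" versus "ghost" with a small fully connected network. The architecture, feature layout and training loop are our own placeholders, not TrackNETv2 or the paper's neuro-classifier.

    import torch
    import torch.nn as nn

    class CandidateFilter(nn.Module):
        """Toy binary classifier over 3 hits x (x, y, z) coordinates per candidate."""
        def __init__(self, hits_per_track=3, features_per_hit=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(hits_per_track * features_per_hit, 64),
                nn.ReLU(),
                nn.Linear(64, 64),
                nn.ReLU(),
                nn.Linear(64, 1),       # logit: real track vs ghost
            )

        def forward(self, hits):        # hits: (batch, 3, 3) -> logits: (batch,)
            return self.net(hits.flatten(1)).squeeze(-1)

    model = CandidateFilter()
    loss_fn = nn.BCEWithLogitsLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One training step on a random toy batch (a stand-in for labelled Monte Carlo candidates).
    hits = torch.randn(256, 3, 3)
    labels = torch.randint(0, 2, (256,)).float()
    loss = loss_fn(model(hits), labels)
    opt.zero_grad(); loss.backward(); opt.step()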

  3. Zhabitskaya E.I., Zhabitsky M.V., Zemlyanay E.V., Lukyanov K.V.
    Calculation of the parameters of microscopic optical potential for pion-nuclei elastic scattering by Asynchronous Differential Evolution algorithm
    Computer Research and Modeling, 2012, v. 4, no. 3, pp. 585-595

    A new Asynchronous Differential Evolution algorithm is used to determine the parameters of the microscopic optical potential for elastic pion scattering on 28Si, 58Ni and 208Pb nuclei at energies of 130, 162 and 180 MeV.

    Views (last year): 1. Citations: 3 (RSCI).
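    For readers unfamiliar with the method, the sketch below (Python) shows the basic differential evolution step: DE/rand/1 mutation, binomial crossover and greedy selection. The actual Asynchronous Differential Evolution algorithm differs in how concurrent workers update the population; this serial version, with a toy quadratic objective standing in for the optical-potential fit, only illustrates the core update.

    import numpy as np

    def de_minimize(f, bounds, pop_size=40, F=0.8, CR=0.9, iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        pop = lo + rng.random((pop_size, len(lo))) * (hi - lo)
        cost = np.array([f(x) for x in pop])
        for _ in range(iters):
            i = rng.integers(pop_size)                        # target vector
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)         # DE/rand/1 mutation
            cross = rng.random(len(lo)) < CR                  # binomial crossover mask
            cross[rng.integers(len(lo))] = True               # keep at least one mutant coordinate
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= cost[i]:                            # greedy selection, applied immediately
                pop[i], cost[i] = trial, f_trial
        k = cost.argmin()
        return pop[k], cost[k]

    # Toy usage: a 4-parameter least-squares fit standing in for the potential parameters.
    best_x, best_f = de_minimize(lambda p: float(np.sum((p - 1.0) ** 2)), bounds=[(-5, 5)] * 4)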
  4. Parkhomenko P.V.
    Pareto optimal analysis of global warming prevention by geoengineering methods
    Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1097-1108

    The study is based on a three-dimensional hydrodynamic coupled global climate model, including an ocean model with realistic depth and continent configurations, a sea ice evolution model, and an energy and moisture balance atmosphere model. The aerosol concentration from the year 2010 to 2100 is calculated as a controlling parameter to stabilize the mean annual surface air temperature. It is shown that in this way it is impossible to achieve a spatially and seasonally uniform approximation to the existing climate, although it is possible to significantly reduce the greenhouse warming effect. The climate will be colder by 0.1–0.2 degrees in the low and mid-latitudes and warmer by 0.2–1.2 degrees at high latitudes. The Pareto frontier is investigated and visualized for two parameters: the mean square deviation of atmospheric temperature for the winter and summer seasons. The Pareto-optimal amount of sulfur emissions would be between 23.5 and 26.5 TgS/year.

    Views (last year): 1. Citations: 3 (RSCI).
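    The bi-objective bookkeeping behind such an analysis can be sketched as follows (Python): given winter and summer temperature deviations for a set of candidate emission levels, keep only the non-dominated ones. The numbers below are synthetic placeholders, not output of the climate model.

    import numpy as np

    def pareto_front(points):
        """Indices of points that are not dominated in both objectives (minimization)."""
        keep = []
        for i, p in enumerate(points):
            dominated = any(np.all(q <= p) and np.any(q < p)
                            for j, q in enumerate(points) if j != i)
            if not dominated:
                keep.append(i)
        return keep

    rng = np.random.default_rng(1)
    emissions = np.linspace(20.0, 30.0, 50)                                # candidate emissions, TgS/year
    winter_dev = np.abs(emissions - 24.0) * 0.05 + rng.random(50) * 0.02   # toy winter-season deviation
    summer_dev = np.abs(emissions - 26.0) * 0.04 + rng.random(50) * 0.02   # toy summer-season deviation
    front = pareto_front(np.c_[winter_dev, summer_dev])
    print("Pareto-optimal emission levels (toy data):", emissions[front])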
  5. Ostroukhov P.A., Kamalov R.A., Dvurechensky P.E., Gasnikov A.V.
    Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 357-376

    In this paper we propose high-order (tensor) methods for two types of saddle point problems. Firstly, we consider the classic min-max saddle point problem. Secondly, we consider the search for a stationary point of the saddle point objective via minimization of its gradient norm. Obviously, the stationary point does not always coincide with the optimal point. However, if we have a linear optimization problem with linear constraints, the algorithm for gradient norm minimization becomes useful. In this case we can reconstruct the solution of the primal optimization problem from the solution of the gradient norm minimization problem for the dual function. In this paper we consider both types of problems without constraints. Additionally, we assume that the objective function is $\mu$-strongly convex in the first argument, $\mu$-strongly concave in the second argument, and that the $p$-th derivative of the objective is Lipschitz-continuous.

    For min-max problems we propose two algorithms. Since we consider a strongly convex-strongly concave problem, the first algorithm uses an existing tensor method for regular convex-concave saddle point problems and accelerates it with the restart technique. The complexity of such an algorithm is linear. If we additionally assume that the first and second derivatives of our objective are Lipschitz, we can improve its performance even further: we can switch to another existing algorithm in its region of quadratic convergence. Thus, we obtain the second algorithm, which has a global linear convergence rate and a local quadratic convergence rate.
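    A minimal sketch of the restart argument behind such a linear rate, in our own notation (not necessarily the paper's): write $z = (x, y)$ and let $z^*$ be the saddle point. Suppose the base tensor method, started from $z_0$ with $\|z_0 - z^*\| \le R$, returns a point $z_N$ with duality gap $\mathrm{gap}(z_N) \le \varepsilon$. By $\mu$-strong convexity-concavity,
    $$\frac{\mu}{2}\|z_N - z^*\|^2 \le \mathrm{gap}(z_N),$$
    so running the base method to accuracy $\varepsilon = \mu R^2/8$ guarantees $\|z_N - z^*\| \le R/2$. Each restart therefore halves the bound on the distance to the saddle point, and $O(\log(1/\varepsilon))$ restarts suffice, which is exactly a globally linear rate.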

    Finally, in convex optimization there exists a special methodology for solving gradient norm minimization problems by tensor methods. Its main idea is to use existing (near-)optimal algorithms inside a special framework. We want to emphasize that inside this framework we do not necessarily need the assumption of strong convexity, because we can regularize the convex objective in a special way to make it strongly convex. In our article we transfer this framework to convex-concave objective functions and use it together with the aforementioned algorithm, which has a global linear and a local quadratic convergence rate.
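    The regularization trick mentioned here can be sketched for convex minimization in our own notation (an illustrative assumption, not the paper's derivation): for a convex $h$ and a starting point $x_0$ with $\|x_0 - x^*\| \le R$, the regularized function $h_\delta(x) = h(x) + \frac{\delta}{2}\|x - x_0\|^2$ is $\delta$-strongly convex, and its minimizer $\hat{x}$ satisfies $\nabla h(\hat{x}) = \delta(x_0 - \hat{x})$. Since $\hat{x} = \mathrm{prox}_{h/\delta}(x_0)$ is no farther from $x^*$ than $x_0$ is, $\|\hat{x} - x_0\| \le 2R$, hence
    $$\|\nabla h(\hat{x})\| = \delta\|\hat{x} - x_0\| \le 2\delta R,$$
    so choosing $\delta = \varepsilon/(2R)$ and minimizing $h_\delta$ to sufficient accuracy yields a point with a small gradient norm of the original objective.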

    Since the saddle point problem is a particular case of the monotone variational inequality problem, the proposed methods will also work for solving strongly monotone variational inequality problems.

  6. Zabotin V.I., Chernyshevskij P.A.
    Extension of Strongin’s Global Optimization Algorithm to a Function Continuous on a Compact Interval
    Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1111-1119

    The Lipschitz continuity property has long been used to solve the global optimization problem and continues to be used. Here we can mention the work of Piyavskii, Yevtushenko, Strongin, Shubert, Sergeyev, Kvasov and others. Most papers assume a priori knowledge of the Lipschitz constant, but deriving this constant is a separate problem. Furthermore, we must prove that an objective function really is Lipschitz, which is a complicated problem too. For the case where Lipschitz continuity is established, Strongin proposed an algorithm for global optimization of a function satisfying the Lipschitz condition on a compact interval without any a priori knowledge of the Lipschitz estimate. The algorithm not only finds a global extremum but also determines the Lipschitz estimate. It is known that every function satisfying the Lipschitz condition on a compact convex set is uniformly continuous, but the reverse is not always true. However, there exist models (Arutyunova, Dulliev, Zabotin) whose study requires the minimization of a continuous but definitely non-Lipschitz function. One of the algorithms for solving such a problem was proposed by R. J. Vanderbei. In his work he introduced a generalization of the Lipschitz property named $\varepsilon$-Lipschitz and proved that a function defined on a compact convex set is uniformly continuous if and only if it satisfies the $\varepsilon$-Lipschitz condition. This property allowed him to extend Piyavskii's method. However, Vanderbei assumed that for a given value of $\varepsilon$ it is possible to obtain an associated Lipschitz $\varepsilon$-constant, which is a very difficult problem. Thus, there is a need to construct, for a function continuous on a compact convex domain, a global optimization algorithm that works in some way like Strongin's algorithm, i.e., without any a priori knowledge of the Lipschitz $\varepsilon$-constant. In this paper we propose an extension of Strongin's global optimization algorithm to a function continuous on a compact interval using the $\varepsilon$-Lipschitz concept, prove its convergence and solve some numerical examples using software that implements the developed method.
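    The flavor of the base algorithm being extended can be conveyed by a rough Python sketch of Strongin's one-dimensional global search with an adaptively estimated Lipschitz constant (minimization form; the reliability parameter $r$, the fixed iteration budget and all names are our simplifications, and the sketch does not include the $\varepsilon$-Lipschitz extension proposed in the paper).

    import math

    def strongin_minimize(f, a, b, r=2.0, iters=200):
        xs, zs = [a, b], [f(a), f(b)]
        for _ in range(iters):
            order = sorted(range(len(xs)), key=xs.__getitem__)
            x = [xs[i] for i in order]
            z = [zs[i] for i in order]
            # adaptive estimate of the Lipschitz constant from the observed slopes
            M = max(abs(z[i] - z[i - 1]) / (x[i] - x[i - 1]) for i in range(1, len(x)))
            m = r * M if M > 0 else 1.0
            # characteristic of each subinterval; refine the most "promising" one
            best_i, best_R = 1, float("-inf")
            for i in range(1, len(x)):
                dx, dz = x[i] - x[i - 1], z[i] - z[i - 1]
                R = m * dx + dz * dz / (m * dx) - 2.0 * (z[i] + z[i - 1])
                if R > best_R:
                    best_i, best_R = i, R
            i = best_i
            x_new = 0.5 * (x[i] + x[i - 1]) - (z[i] - z[i - 1]) / (2.0 * m)
            xs.append(x_new)
            zs.append(f(x_new))
        k = min(range(len(zs)), key=zs.__getitem__)
        return xs[k], zs[k]

    # Toy usage on a multi-extremal test function.
    x_star, f_star = strongin_minimize(lambda x: math.sin(x) + math.sin(10.0 * x / 3.0), a=2.7, b=7.5)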

  7. Ablaev S.S., Makarenko D.V., Stonyakin F.S., Alkousa M.S., Baran I.V.
    Subgradient methods for non-smooth optimization problems with some relaxation of sharp minimum
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 473-495

    Non-smooth optimization often arises in many applied problems. The issues of developing efficient computational procedures for such problems in high-dimensional spaces are very topical. First-order methods (subgradient methods) are well applicable here, but in fairly general situations they lead to weak rate guarantees for large-scale problems. One approach to this type of problem is to identify a subclass of non-smooth problems that admit relatively optimistic results on the rate of convergence. For example, one option for an additional assumption is the condition of a sharp minimum, proposed in the late 1960s by B. T. Polyak. When information about the minimal value of the function is available, for Lipschitz-continuous problems with a sharp minimum it turned out to be possible to propose a subgradient method with the Polyak step-size, which guarantees a linear rate of convergence in the argument. This approach made it possible to cover a number of important applied problems (for example, the problem of projecting onto a convex compact set). However, both the availability of the minimal value of the function and the sharp minimum condition itself look rather restrictive. In this regard, in this paper we propose a generalized condition for a sharp minimum, somewhat similar to the inexact oracle proposed recently by Devolder – Glineur – Nesterov. The proposed approach makes it possible to extend the applicability of subgradient methods with the Polyak step-size to situations with inexact information about the value of the minimum, as well as an unknown Lipschitz constant of the objective function. Moreover, the use of local analogs of the global characteristics of the objective function makes it possible to apply results of this type to wider classes of problems. We show the possibility of applying the proposed approach to strongly convex non-smooth problems, and we also make an experimental comparison with the known optimal subgradient method for this class of problems. Moreover, some results were obtained on the applicability of the proposed technique to certain types of problems with relaxed convexity assumptions: the recently proposed notion of weak $\beta$-quasi-convexity and ordinary quasi-convexity. We also study a generalization of the described technique to the situation in which a $\delta$-subgradient of the objective function is available instead of the usual subgradient. For one of the considered methods, conditions are found under which, in practice, it is possible to dispense with the projection of the iterative sequence onto the feasible set of the problem.
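    The classical subgradient method with the Polyak step-size, which the paper generalizes, can be sketched as follows (Python). Here the minimal value f_min is assumed to be known exactly, whereas the paper's setting allows inexact information about it and an unknown Lipschitz constant.

    import numpy as np

    def polyak_subgradient(f, subgrad, x0, f_min, iters=500):
        x = np.asarray(x0, dtype=float).copy()
        best = x.copy()
        for _ in range(iters):
            g = subgrad(x)
            g_norm2 = float(np.dot(g, g))
            if g_norm2 == 0.0:
                break                                   # a zero subgradient: x is already a minimizer
            x = x - (f(x) - f_min) / g_norm2 * g        # Polyak step-size
            if f(x) < f(best):
                best = x.copy()
        return best

    # Toy usage: f(x) = ||x||_1 has a sharp minimum at the origin with f_min = 0.
    f = lambda x: float(np.sum(np.abs(x)))
    subgrad = lambda x: np.sign(x)
    x_hat = polyak_subgradient(f, subgrad, x0=np.ones(5), f_min=0.0)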

  8. Molecular dynamics methods that use the ReaxFF force field allow one to obtain sufficiently good results in simulating large multicomponent chemically reactive systems. Here we present an algorithm for finding optimal parameters of the ReaxFF molecular-dynamics force field for arbitrary chemical systems, together with its implementation. The method is based on the multidimensional global minimum search technique suggested by R. G. Strongin. It has good scalability, which is useful for running on distributed parallel computers.

    Views (last year): 1. Citations: 1 (RSCI).
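    The objective minimized in such a parameter search is typically a weighted least-squares mismatch between quantities computed with trial force-field parameters and reference data; the sketch below (Python) shows only this bookkeeping, with a hypothetical stand-in for the actual ReaxFF evaluation, while the global search itself (Strongin's multidimensional method in the paper) is left abstract.

    import numpy as np

    def fitting_error(params, reference_items, reference_values, weights, evaluate_model):
        """Weighted least-squares mismatch between model predictions and reference data.
        evaluate_model is a hypothetical stand-in for computing a quantity with trial
        ReaxFF parameters (an energy, a bond length, a charge, ...)."""
        predicted = np.array([evaluate_model(params, item) for item in reference_items])
        return float(np.sum(weights * (predicted - reference_values) ** 2))

    # Toy stand-in so the sketch runs end to end; a real setup would call an MD engine here.
    toy_items = [0, 1, 2]
    toy_values = np.array([1.0, 2.0, 3.0])
    toy_weights = np.ones(3)
    toy_model = lambda params, item: params[item % len(params)]
    err = fitting_error(np.array([1.1, 1.9, 3.2]), toy_items, toy_values, toy_weights, toy_model)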