-
First-order optimization methods are workhorses in a wide range of modern applications in economics, physics, biology, machine learning, control, and other fields. Among first-order methods, accelerated and momentum methods receive special attention because of their practical efficiency. The heavy-ball method (HB) is one of the first momentum methods: it was proposed in 1964, and the first analysis was conducted for quadratic strongly convex functions. Since then, a number of variations of HB have been proposed and analyzed. In particular, HB is known for its simplicity of implementation and its performance on nonconvex problems. However, like other momentum methods, it exhibits nonmonotone behavior, and for optimal parameters the method suffers from the so-called peak effect. To address this issue, in this paper we consider an averaged version of the heavy-ball method (AHB). We show that for quadratic problems AHB has a smaller maximal deviation from the solution than HB. Moreover, for general convex and strongly convex functions, we prove non-accelerated rates of global convergence of AHB, of its weighted version WAHB, and of AHB with restarts (R-AHB). To the best of our knowledge, such guarantees for HB with averaging were not explicitly proven for strongly convex problems in the existing literature. Finally, we conduct several numerical experiments on minimizing quadratic and nonquadratic functions to demonstrate the advantages of using averaging for HB. We also test one more modification of AHB, the tail-averaged heavy-ball method (TAHB). In the experiments, we observed that HB with a properly adjusted averaging scheme converges faster than HB without averaging and has smaller oscillations.
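To make the two schemes concrete, here is a minimal Python sketch of plain HB and its uniformly averaged variant AHB on a strongly convex quadratic; the step size and momentum values are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def heavy_ball(grad, x0, alpha, beta, n_iters):
    """Plain HB: x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1})."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(n_iters):
        x_next = x - alpha * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

def averaged_heavy_ball(grad, x0, alpha, beta, n_iters):
    """AHB: run the same HB recursion but return the running average of the
    iterates, which damps the oscillations (peak effect) of plain HB."""
    x_prev, x = x0.copy(), x0.copy()
    avg = x0.copy()
    for k in range(1, n_iters + 1):
        x_next = x - alpha * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
        avg += (x - avg) / (k + 1)   # uniform average of x_0, ..., x_k
    return avg

# Example: strongly convex quadratic f(x) = 0.5 x^T A x with minimum at 0
A = np.diag([1.0, 10.0, 100.0])
grad = lambda x: A @ x
x0 = np.ones(3)
print(averaged_heavy_ball(grad, x0, alpha=0.01, beta=0.9, n_iters=2000))
```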
-
Algorithm for vortices identification based on flow velocity vectors using the simplest mathematical model of vortex dynamics
Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1477-1493
An algorithm is proposed to identify the parameters of a 2D vortex structure based on information about the flow velocity at a finite (small) set of reference points. The approach uses a set of point vortices as a model system and minimizes, in the space of model parameters, a functional that compares the model and known sets of velocity vectors. The numerical implementation uses gradient descent with step-size control, approximation of derivatives by finite differences, and the analytical expression for the velocity field induced by the point vortex model. An experimental analysis of the algorithm is carried out on test flows: a single point vortex and a system of several point vortices, a Rankine vortex, and a Lamb dipole. From the velocity fields of the test flows, the velocity vectors used for identification were taken at a randomly distributed set of reference points (from 3 to 200 points). The computations show that: the algorithm converges to the minimum from a wide range of initial approximations; the algorithm converges in all cases when the reference points are located in areas where the streamlines of the test and model systems are topologically equivalent; if the streamlines of the systems are not topologically equivalent, the percentage of successful calculations decreases, but convergence can still take place; when the method converges, the coordinates of the vortices of the model system are close to the centers of the vortices of the test configurations, and in many cases so are the values of their circulations; convergence depends more on the location than on the number of vectors used for identification. The results of the study allow us to recommend the proposed algorithm for identifying 2D vortex structures whose streamlines are topologically close to those of systems of point vortices.
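The following is a minimal sketch of the identification loop described above: the analytical point-vortex velocity field, a least-squares misfit, and gradient descent with finite-difference derivatives and step halving. The paper's exact step-control rule, tolerances, and stopping criteria are not given in the abstract, so these are illustrative choices.

```python
import numpy as np

def vortex_velocity(theta, pts):
    """Velocity at pts induced by point vortices.
    theta: flat array of triples (x_v, y_v, Gamma); pts: array (m, 2)."""
    u = np.zeros_like(pts, dtype=float)
    for xv, yv, gamma in theta.reshape(-1, 3):
        dx = pts[:, 0] - xv
        dy = pts[:, 1] - yv
        r2 = dx**2 + dy**2 + 1e-12          # guard against division by zero
        u[:, 0] += -gamma * dy / (2 * np.pi * r2)
        u[:, 1] +=  gamma * dx / (2 * np.pi * r2)
    return u

def misfit(theta, pts, v_meas):
    """Sum of squared differences between model and measured velocity vectors."""
    return np.sum((vortex_velocity(theta, pts) - v_meas) ** 2)

def identify(theta0, pts, v_meas, step=1e-2, h=1e-6, n_iters=5000):
    """Gradient descent with finite-difference gradients and step halving."""
    theta = theta0.astype(float).ravel()
    f = misfit(theta, pts, v_meas)
    for _ in range(n_iters):
        g = np.array([(misfit(theta + h * e, pts, v_meas) - f) / h
                      for e in np.eye(theta.size)])
        while misfit(theta - step * g, pts, v_meas) > f and step > 1e-12:
            step /= 2                        # simple step-size control
        theta -= step * g
        f = misfit(theta, pts, v_meas)
    return theta.reshape(-1, 3)
```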
-
Optimal control of the motion in an ideal fluid of a screw-shaped body with internal rotors
Computer Research and Modeling, 2017, v. 9, no. 5, pp. 741-759
In this paper we consider the controlled motion in an ideal fluid of a helical body with three blades, which is executed by rotating three internal rotors. We pose the problem of selecting control actions that ensure the motion of the body near a predetermined trajectory. To determine controls that guarantee motion near the given curve, we propose methods based on hybrid genetic algorithms (genetic algorithms with real encoding and with additional learning of the leader of the population by a gradient method) and artificial neural networks. The correctness of the proposed numerical methods is assessed using previously obtained differential equations which define the law of change of the control actions for the predetermined trajectory.
In the approach based on hybrid genetic algorithms, the initial problem of minimizing the integral functional reduces to minimizing a function of many variables. The given time interval is divided into small elements, on each of which the control actions are approximated by Lagrange polynomials of orders 2 and 3. When appropriately tuned, the hybrid genetic algorithms reproduce a solution close to the exact one. However, the computation of 1 second of the physical process takes about 300 seconds of processor time.
To speed up the computation of the control actions, we propose an algorithm based on artificial neural networks. The neural network takes as input the components of the required displacement vector and returns as output the node values of the Lagrange polynomials which approximately describe the control actions. The neural network is trained by the well-known backpropagation method. The training sample is generated using the approach based on hybrid genetic algorithms. The computation of 1 second of the physical process by means of the neural network requires about 0.004 seconds of processor time, that is, six orders of magnitude faster than the hybrid genetic algorithm. The control computed by the artificial neural network differs from the exact control. However, in spite of this difference, it ensures that the predetermined trajectory is followed exactly.
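To illustrate the control parametrization used in both approaches, here is a minimal sketch of evaluating a piecewise control signal from node values via second-order Lagrange polynomials; the element count, interval, and the stand-in random node values are illustrative assumptions.

```python
import numpy as np

def lagrange_eval(t, nodes, values):
    """Evaluate the Lagrange interpolation polynomial through (nodes, values) at t."""
    total = 0.0
    for i, (ti, vi) in enumerate(zip(nodes, values)):
        basis = 1.0
        for j, tj in enumerate(nodes):
            if j != i:
                basis *= (t - tj) / (ti - tj)
        total += vi * basis
    return total

def control(t, t_grid, node_values):
    """Piecewise control: on each element [t_grid[k], t_grid[k+1]] the control is
    a second-order Lagrange polynomial through three equally spaced nodes whose
    values are the optimization variables (found, e.g., by the genetic algorithm)."""
    k = min(np.searchsorted(t_grid, t, side="right") - 1, len(t_grid) - 2)
    a, b = t_grid[k], t_grid[k + 1]
    nodes = np.array([a, 0.5 * (a + b), b])
    return lagrange_eval(t, nodes, node_values[k])

# Example: 4 elements on [0, 1]; random node values stand in for GA output
t_grid = np.linspace(0.0, 1.0, 5)
node_values = np.random.uniform(-1.0, 1.0, size=(4, 3))
print(control(0.3, t_grid, node_values))
```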
-
Estimation of natural frequencies of pure bending vibrations of composite nonlinearly elastic beams and circular plates
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 945-953
The paper presents a linearization method for the stress-strain curves of nonlinearly deformable beams and circular plates in order to generalize the pure bending vibration equations. We consider composite, on average isotropic prismatic beams of constant rectangular cross-section and circular plates of constant thickness made of nonlinearly elastic materials. The technique consists in determining approximate Young's moduli from the initial stress-strain state of the beam and plate subjected to the action of a bending moment.
The paper proposes two linearization criteria: equality of the specific potential strain energy and minimization of the standard deviation in the state-equation approximation. The method yields closed-form estimates of the natural frequencies of layered and structurally heterogeneous, on average isotropic nonlinearly elastic beams and circular plates. This makes it possible to significantly reduce the resources needed for vibration analysis and modeling of these structural elements. In addition, the paper shows that the proposed linearization criteria estimate the natural frequencies with the same accuracy.
Since in the general case even isotropic materials exhibit different resistance to tension and compression, the stress-strain curves of the composite material components are taken to be piecewise-linear Prandtl diagrams with proportionality limits and tangential Young's moduli that differ under tension and compression. As parameters of the stress-strain curve, we use the effective Voigt characteristics (under the hypothesis of strain homogeneity) for a longitudinally layered material structure and the effective Reuss characteristics (under the hypothesis of stress homogeneity) for a transversely layered beam and an axially laminated plate. In addition, the effective Young's moduli and proportionality limits obtained by the author's homogenization method are given for a structurally heterogeneous, on average isotropic material. As an example, the natural frequencies of two-phase beams are calculated as functions of the component concentrations.
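For reference, the classical Voigt and Reuss effective moduli mentioned above have a simple closed form for a two-phase material with phase moduli $E_1, E_2$ and volume fractions $c_1 + c_2 = 1$ (a standard homogenization result, not a formula taken from the paper):

```latex
% Voigt (uniform strain) and Reuss (uniform stress) effective Young's moduli,
% which bound the true effective modulus from above and below:
E_{\mathrm{V}} = c_1 E_1 + c_2 E_2,
\qquad
E_{\mathrm{R}} = \left( \frac{c_1}{E_1} + \frac{c_2}{E_2} \right)^{-1},
\qquad
E_{\mathrm{R}} \le E_{\mathrm{eff}} \le E_{\mathrm{V}}.
```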
-
Searching stochastic equilibria in transport networks by universal primal-dual gradient method
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 335-345
We consider one of the problems of transport modelling: searching for the equilibrium distribution of traffic flows in a network. We use the classic Beckmann model to describe time costs and flow distribution in a network represented by a directed graph. The agents' behavior is not completely rational, which is described by introducing Markov logit dynamics: each driver selects a route randomly according to the Gibbs distribution, taking into account the current time costs on the edges of the graph. The problem thus reduces to finding the stationary distribution of this dynamics, which is a stochastic Nash–Wardrop equilibrium in the corresponding population congestion game on the transport network. Since the game is potential, this problem is equivalent to minimizing a certain functional over flow distributions. The stochasticity manifests itself, in contrast to the non-stochastic case, in the appearance of an entropy regularization. The dual problem is constructed to obtain a solution of the optimization problem, and the universal primal-dual gradient method is applied. A major feature of this method is its adaptive adjustment to the local smoothness of the problem, which is most important when the objective function has a complex structure and a prior smoothness bound cannot be obtained with acceptable accuracy. Such a situation occurs in the considered problem, since the properties of the function strongly depend on the transport graph, on which we do not impose strong restrictions. The article describes the algorithm, including the numerical differentiation used to calculate the objective function value and gradient. In addition, the paper presents a theoretical estimate of the time complexity of the algorithm and the results of numerical experiments conducted on the network of a small American town.
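The logit (Gibbs) route-choice step described above can be sketched in a few lines; the route costs and the temperature parameter below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def logit_route_choice(route_costs, gamma=1.0, rng=None):
    """Sample a route index from the Gibbs distribution
    P(route i) ∝ exp(-route_costs[i] / gamma), where gamma > 0 controls how
    'rational' the drivers are (gamma -> 0 recovers deterministic best response)."""
    rng = rng or np.random.default_rng()
    c = np.asarray(route_costs, dtype=float)
    z = np.exp(-(c - c.min()) / gamma)   # shift by min cost for numerical stability
    p = z / z.sum()
    return rng.choice(len(c), p=p)

# Example: three routes with current time costs of 10, 12, and 15 minutes
print(logit_route_choice([10.0, 12.0, 15.0], gamma=2.0))
```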
-
Evaluation of the scalability property of the program for the simulation of atmospheric chemical transport by means of the simulator gem5
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 773-794
In this work we have developed a new efficient program for the numerical simulation of 3D global chemical transport on an adaptive finite-difference grid, which allows us to concentrate grid points in the regions where the flow variables change sharply and to coarsen the grid in the regions where they behave smoothly, significantly reducing the grid size. We represent the adaptive grid with a combination of dynamic (tree, linked list) and static (array) data structures. The dynamic data structures are used for grid reconstruction, while the calculations of the flow variables are based on the static data structures. The introduction of the static data structures speeds up the program by a factor of 2 in comparison with the conventional approach that represents the grid with dynamic data structures only.
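A minimal sketch of this two-representation idea is given below; the actual program's data layout is not specified in the abstract, so the class and field names are illustrative. The tree is traversed only when the grid is rebuilt, and the leaf data are then flattened into contiguous arrays for the numerical sweeps.

```python
import numpy as np

class Cell:
    """Dynamic part: a tree node; leaves carry the flow variables."""
    def __init__(self, value=0.0):
        self.value = value
        self.children = []           # empty for a leaf

    def refine(self):
        """Split a leaf into 8 children (octree refinement in 3D)."""
        self.children = [Cell(self.value) for _ in range(8)]

def leaves(cell):
    if not cell.children:
        yield cell
    else:
        for ch in cell.children:
            yield from leaves(ch)

def flatten(root):
    """Static part: copy leaf values into a contiguous array for fast sweeps."""
    leaf_list = list(leaves(root))
    return leaf_list, np.array([c.value for c in leaf_list])

root = Cell(1.0)
root.refine()                        # grid reconstruction uses the tree ...
leaf_list, values = flatten(root)    # ... numerics use the flat array
values *= 0.5                        # e.g. one step of a numerical update
for c, v in zip(leaf_list, values):  # write results back before the next rebuild
    c.value = v
```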
We wrote and tested our program on a computer with 6 CPU cores. Using the computer microarchitecture simulator gem5, we estimated the scalability of the program on a significantly greater number of cores (up to 32), using several models of a computer system with the design "computational cores – cache – main memory". It has been shown that the microarchitecture of a computer system has a significant impact on scalability, i.e. the same program demonstrates different efficiency on different computer microarchitectures. For example, we obtain a speedup of 14.2 on a processor with 32 cores and 2 cache levels, but a speedup of 22.2 on a processor with 32 cores and 3 cache levels. The execution time of a program on a computer model in gem5 is 10^4–10^5 times greater than the execution time of the same program on a real computer and equals 1.5 hours for the most complex model.
Also in this work we describe how to configure gem5 and how to perform simulations with gem5 most efficiently.
-
A hybrid multi-objective carpool route optimization technique using genetic algorithm and A* algorithm
Computer Research and Modeling, 2021, v. 13, no. 1, pp. 67-85
Carpooling has gained considerable importance as an effective solution for reducing pollution, mitigating traffic and congestion on the roads, reducing the demand for parking facilities, lowering energy and fuel consumption and, most importantly, reducing carbon emissions, thus improving the quality of life in cities. This work presents a hybrid GA-A* algorithm to obtain optimal routes for the carpooling problem in the domain of multiobjective optimization with multiple conflicting objectives. Though the Genetic Algorithm provides optimal solutions, the A* algorithm, because of its efficiency in finding the shortest route between any two points based on heuristics, enhances the optimal routes obtained using the Genetic Algorithm. The refined routes obtained using the GA-A* algorithm are further subjected to a dominance test to obtain non-dominated solutions based on Pareto optimality. The routes obtained maximize the profit of the service provider by minimizing the travel and detour distance as well as pick-up/drop costs while maximizing the utilization of the car. The proposed algorithm has been implemented over the Salt Lake area of Kolkata. Route distance and detour distance for the optimal routes obtained using the proposed algorithm are consistently smaller for the same number of passengers when compared to the corresponding results obtained from an existing algorithm. Statistical analyses such as boxplots also confirm that the proposed algorithm consistently performs better than the existing algorithm based on the Genetic Algorithm alone.
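To make the A* refinement step concrete, here is a minimal, self-contained A* sketch over a small weighted graph; the paper's road network, cost weights, and heuristic are not specified in the abstract, so Euclidean distance is used here as an illustrative admissible heuristic.

```python
import heapq, math

def astar(graph, coords, start, goal):
    """A* shortest path. graph: {node: [(neighbor, edge_cost), ...]},
    coords: {node: (x, y)} used by the Euclidean heuristic."""
    def h(n):                                    # admissible heuristic
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    open_heap = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            g_new = g + cost
            if g_new < best_g.get(nbr, math.inf):
                best_g[nbr] = g_new
                heapq.heappush(open_heap, (g_new + h(nbr), g_new, nbr, path + [nbr]))
    return None, math.inf

graph = {"A": [("B", 1.0), ("C", 2.5)], "B": [("C", 1.0)], "C": []}
coords = {"A": (0, 0), "B": (1, 0), "C": (2, 0)}
print(astar(graph, coords, "A", "C"))            # (['A', 'B', 'C'], 2.0)
```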
-
A hybrid regularizers approach based model for restoring image corrupted by Poisson noise
Computer Research and Modeling, 2021, v. 13, no. 5, pp. 965-978
Image denoising is one of the fundamental problems in digital image processing. It usually refers to the reconstruction of an image from an observation degraded by noise. Many factors cause this degradation, such as transceiver equipment or environmental influences. To obtain higher quality images, many methods have been proposed for the image denoising problem. Most image denoising methods are based on total variation (TV) regularization, for which efficient algorithms have been developed to solve the related optimization problem. TV-based models have become a standard technique in image restoration owing to their ability to preserve image sharpness.
In this paper, we focus on Poisson noise, which usually appears in photon-counting devices. We propose an effective regularization model based on a combination of first-order and fractional-order total variation for reconstructing images corrupted by Poisson noise. The proposed model allows us to eliminate noise while preserving edges. An efficient alternating minimization algorithm is employed to solve the optimization problem. Finally, the provided numerical results show that our proposed model preserves more detail and achieves higher visual image quality than recent state-of-the-art methods.
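A hybrid model of this type can be written schematically as follows; this is a generic form consistent with the abstract (Kullback–Leibler-type fidelity is the standard choice for Poisson noise), not necessarily the paper's exact weights or fidelity term.

```latex
% Generic hybrid first-/fractional-order TV model for Poisson denoising:
% f is the observed image, u the restored one, alpha in (1, 2) the fractional
% order, and lambda_1, lambda_2 > 0 the regularization weights.
\min_{u > 0} \; \int_\Omega \bigl( u - f \log u \bigr)\, dx
  \; + \; \lambda_1 \, \mathrm{TV}(u)
  \; + \; \lambda_2 \, \mathrm{TV}^{\alpha}(u)
```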
-
On the relations of stochastic convex optimization problems with empirical risk minimization problems on $p$-norm balls
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319
In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e.g., risk minimization) and mathematical statistics (e.g., maximum likelihood estimation). There are two main approaches to solving such problems: the Stochastic Approximation approach (online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline approach). In the offline approach, the problem is replaced by its empirical counterpart, the empirical risk minimization problem (written schematically below). The natural question is how to choose the sample size, i.e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is also a solution of the original problem with the desired precision. This question is one of the central ones in modern machine learning and optimization. In the last decade, significant advances were made in these areas for convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of arbitrary balls in the $p$-norms. We also explore how the parameter $p$ affects the estimates of the required number of terms in the empirical risk.
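Schematically (a standard formulation, not notation taken from the paper), the original stochastic problem and its empirical SAA counterpart over a $p$-norm ball read:

```latex
\min_{\|x\|_p \le R} \; F(x) := \mathbb{E}_{\xi}\, f(x, \xi)
\qquad \longrightarrow \qquad
\min_{\|x\|_p \le R} \; \widehat{F}_N(x) := \frac{1}{N} \sum_{i=1}^{N} f(x, \xi_i),
```

and the question above is how large $N$ must be so that an accurate solution of the right-hand problem solves the left-hand one to the desired accuracy.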
In this paper, both convex and saddle-point optimization problems are considered. For strongly convex problems, the existing results on identical sample sizes in the two approaches (online and offline) were generalized to arbitrary norms. Moreover, it was shown that the strong convexity condition can be weakened: the obtained results remain valid for functions satisfying the quadratic growth condition. In the case when this condition does not hold, it is proposed to regularize the original problem in an arbitrary norm. In contrast to convex problems, saddle-point problems are much less studied. For saddle-point problems, the sample size was obtained under the condition of $\gamma$-growth of the objective function. For $\gamma = 1$, this condition is the sharp-minimum condition from convex problems. It is shown that in the case of a sharp minimum the sample size is almost independent of the desired accuracy of the solution of the original problem.
-
Mathematical modeling of thermophysical processes in the wall of the Baker cyst when the intra-cystic fluid is heated by laser radiation with a wavelength of 1.47 μm
Computer Research and Modeling, 2018, v. 10, no. 1, pp. 103-112
The work is devoted to the theoretical study of the destructive influence on normal tissues of the organism by infrared radiation that extends beyond the treated pathological focus. This situation is possible when direct laser radiation acts on the tissues for an extremely long time. A solution to this problem can be the uniform distribution of heat inside the volume through indirect heating of the liquid, which contributes to minimal damage to the perifocal structures. A non-stationary thermophysical model of heat propagation in biological tissues is presented, which allows one to study the energy transfer from the internal liquid contents of the Baker cyst, heated by infrared laser radiation of a given specific power, through a wall of a certain thickness to the surrounding biological tissues. The space-time temperature distribution in the cyst wall and the surrounding fat tissue is calculated by the finite-difference method (a schematic sketch of such a scheme is given below). The time of effective temperature exposure over the entire thickness of the cyst wall was estimated for a temperature of 55 °C on its outer surface; the procedure remains safe provided that the exposure at this temperature lasts no more than 10 seconds.
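Below is a minimal sketch of an explicit finite-difference scheme for the 1D heat equation on a slab; the paper's actual geometry, tissue parameters, and boundary conditions are not given in the abstract, so all values here are illustrative.

```python
import numpy as np

# Explicit FD scheme for u_t = kappa * u_xx on a slab (e.g. the cyst wall).
kappa = 1.4e-7        # thermal diffusivity, m^2/s (illustrative, soft-tissue-like)
L, nx = 3e-3, 61      # wall thickness 3 mm, number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / kappa     # satisfies the stability condition dt <= dx^2 / (2 kappa)

u = np.full(nx, 37.0)        # initial tissue temperature, °C
u[0] = 55.0                  # heated surface (Dirichlet condition)

t = 0.0
while t < 10.0:              # simulate 10 s of exposure
    u[1:-1] += kappa * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[-1] = 37.0             # far boundary kept at body temperature
    t += dt

print(f"temperature profile after {t:.1f} s:", np.round(u[::10], 1))
```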
As a result of the calculations, it is established that there are several operating modes of the surgical laser that meet all safety requirements while keeping the procedure effective. Local one-sided hyperthermia of the synovial membrane and subsequent coagulation of the entire wall thickness due to heat transfer contribute to the elimination of the cavity neoplasm of the popliteal region. For a wall thickness of 3 mm, a satisfactory heating mode is one in which the exposure lasts about 200 seconds and the specific power of the laser radiation in the internal liquid contents of the Baker cyst is approximately 1.