- Application of Turbulence Problem Solver (TPS) software complex for numerical modeling of the interaction between laser radiation and metals
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 619-630

The work is dedicated to the use of the software package Turbulence Problem Solver (TPS) for the numerical simulation of a wide range of laser problems. The capabilities of the package are demonstrated by the example of numerical simulation of the interaction of femtosecond laser pulses with thin metal films. The software package TPS developed by the authors is intended for the numerical solution of hyperbolic systems of differential equations on multiprocessor computing systems with distributed memory. The package is a modern and expandable software product. Its architecture gives the researcher the opportunity to model different physical processes in a uniform way, using different numerical methods and program blocks containing the specific initial conditions, boundary conditions and source terms for each problem. The package can be extended by adding new classes of problems, computational methods, initial and boundary conditions, as well as equations of state of matter. The numerical methods implemented in the package were tested on benchmark problems in one-dimensional, two-dimensional and three-dimensional geometry, including Riemann problems on the decay of an arbitrary discontinuity with different configurations of the exact solution.
Thin films on substrates are an important class of targets for nanomodification of surfaces in plasmonics and sensor applications, and many articles are devoted to this subject. Most of them, however, focus on the dynamics of the film itself, paying little attention to the substrate and treating it simply as an object that absorbs the first compression wave and does not affect the surface structures arising from irradiation. The paper describes in detail a computational experiment on the numerical simulation of the interaction of a single ultrashort laser pulse with a gold film deposited on a thick glass substrate. A uniform rectangular grid and the first-order Godunov numerical method were used. The presented results confirm the theory of the shock-wave mechanism of hole formation in the metal under femtosecond laser action for a thin gold film, about 50 nm thick, on a thick glass substrate.
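The abstract mentions a uniform rectangular grid with the first-order Godunov method. As a generic illustration of that method class (not TPS code; the equation, variable names and parameters below are illustrative), here is a first-order finite-volume update for the 1D linear advection equation, where the interface Riemann problem has the exact upwind solution:

```python
import numpy as np

# First-order Godunov (upwind) scheme for u_t + a*u_x = 0 with a > 0,
# on a uniform grid with periodic boundaries. A minimal sketch of the
# method class named in the abstract, not code from the TPS package.
a = 1.0                          # advection speed
nx, L = 200, 1.0                 # number of cells, domain length
dx = L / nx
dt = 0.9 * dx / abs(a)           # CFL-limited time step

x = (np.arange(nx) + 0.5) * dx
u = np.where((0.4 < x) & (x < 0.6), 1.0, 0.0)   # square pulse

t, t_end = 0.0, 0.25
while t < t_end:
    step = min(dt, t_end - t)
    # Exact Riemann solution at each interface: the upwind (left) state.
    f_left = a * np.roll(u, 1)        # flux through each cell's left face
    f_right = np.roll(f_left, -1)     # flux through each cell's right face
    u -= step / dx * (f_right - f_left)
    t += step
```

For a full hyperbolic system such as the Euler equations, the interface flux comes from an exact or approximate Riemann solver instead of simple upwinding, which is the setting the package targets.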
- Physical research, numerical and analytical modeling of explosion phenomena. A review
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 505-546

The review considers a wide range of phenomena and problems associated with explosions. Detailed numerical studies revealed an interesting physical effect: the formation of discrete vortex structures directly behind the front of a shock wave propagating in the dense layers of a heterogeneous atmosphere. The necessity of further investigation of such phenomena, and of determining the degree of their connection with the possible development of gas-dynamic instability, is shown. A brief analysis of the numerous works on the thermal explosion of meteoroids during their high-speed movement in the Earth's atmosphere is given. Much attention is paid to the development of a numerical algorithm for calculating the simultaneous explosion of several fragments of a meteoroid, and the features of the development of such a gas-dynamic flow are analyzed. The work shows that previously developed algorithms for calculating explosions can be successfully used to study explosive volcanic eruptions. The paper presents and discusses the results of such studies for both continental and underwater volcanoes, with certain restrictions on the conditions of volcanic activity.
A mathematical analysis is performed and the results of analytical studies are presented for a number of important physical phenomena characteristic of explosions with high specific energy in the ionosphere. It is shown that preliminary laboratory physical modeling of the main processes that determine these phenomena is of fundamental importance for the development of sufficiently complete and adequate theoretical and numerical models of such complex phenomena as powerful plasma disturbances in the ionosphere. Laser plasma is the closest object for such modeling. The results of the corresponding theoretical and experimental studies are presented and their scientific and practical significance is shown. A brief review is given of recent work on the use of laser radiation for laboratory physical modeling of the effects of a nuclear explosion on asteroid materials.
The analysis performed in the review made it possible to single out and preliminarily formulate some interesting and scientifically significant questions that should be investigated on the basis of the ideas already obtained: finely dispersed chemically active systems formed during volcanic eruptions; small-scale vortex structures; and the generation of spontaneous magnetic fields due to the development of instabilities, together with their role in the transformation of plasma energy during its expansion in the ionosphere. It is also important to study a possible laboratory physical simulation of the thermal explosion of bodies under the influence of a high-speed plasma flow, which so far has only theoretical interpretations.
- Solving traveling salesman problem via clustering and a new algorithm for merging tours
Computer Research and Modeling, 2025, v. 17, no. 1, pp. 45-58

Traditional methods for solving the traveling salesman problem are not effective for high-dimensional problems due to their high computational complexity. One of the most effective ways to attack such problems is the decomposition approach, which includes three main stages: clustering the vertices, solving the subproblem within each cluster, and merging the obtained solutions into a final solution. This article focuses on the third stage, merging the cycles obtained from the subproblems, since this stage is not always given sufficient attention, which leads to less accurate final solutions. The paper proposes a new modified Sigal algorithm for merging cycles. To evaluate its effectiveness, it is compared with two other merging algorithms: the method of connecting midpoints of edges and an algorithm based on the closeness of cluster centroids. The dependence of the quality of the final solution on the algorithm used for merging cycles is investigated. The modified Sigal algorithm performs pairwise merging and minimizes the total distance; the centroid method connects clusters based on the closeness of their centroids; the midpoint algorithm estimates the distance between midpoints of edges. Two types of clustering, k-means and affinity propagation, were also considered. Numerical experiments were performed on the TSPLIB dataset with different numbers of cities and topologies to test the effectiveness of the proposed algorithm. The study analyzes the errors caused by the order in which clusters are merged, by the quality of the subproblem solutions, and by the number of clusters. Experiments show that the modified Sigal algorithm yields the smallest median final distance and the most stable results compared to the other methods, and that its final solution quality is less sensitive to the sequence in which clusters are merged. Improving the quality of the subproblem solutions usually results in a linear improvement of the final solution, but the merging algorithm rarely affects the degree of this improvement.
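The exact merging rules of the modified Sigal algorithm are given in the paper itself; as a hedged sketch of the three-stage decomposition pipeline the abstract describes, the following uses k-means for clustering, a greedy nearest-neighbor heuristic as a stand-in for the per-cluster solver, and centroid-proximity chaining as a stand-in for the merging stage (all function names and parameters are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def nearest_neighbor_tour(pts):
    """Greedy stand-in for the per-cluster TSP solver."""
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        last = pts[tour[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(pts[j] - last))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def solve_by_decomposition(cities, k=5, seed=0):
    # Stage 1: cluster the vertices.
    labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(cities)
    # Stage 2: solve a subproblem inside each cluster.
    subtours = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        order = nearest_neighbor_tour(cities[idx])
        subtours.append([idx[i] for i in order])
    # Stage 3 (simplistic stand-in for the merging step studied in the paper):
    # chain subtours by centroid proximity, concatenating city sequences.
    cents = np.array([cities[t].mean(axis=0) for t in subtours])
    order, rest = [0], set(range(1, k))
    while rest:
        last = cents[order[-1]]
        nxt = min(rest, key=lambda c: np.linalg.norm(cents[c] - last))
        order.append(nxt)
        rest.remove(nxt)
    return [city for c in order for city in subtours[c]]

cities = np.random.default_rng(0).random((300, 2))
tour = solve_by_decomposition(cities)
```

Replacing the last stage with a better merging rule, which is exactly what the paper studies, changes the final tour quality without touching the first two stages.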
- Modeling of deformation processes in structure of flexible woven composites
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 547-557

Flexible woven composites are classified as high-tech innovative materials. Due to the combination of various filler components and reinforcement elements, such materials are used in construction, the defense industry, shipbuilding, aircraft construction, etc. In the domestic literature, insufficient attention is paid to woven composites whose reinforcing-layer geometry changes during deformation. This paper analyzes a previously proposed comprehensive approach to modeling the behavior of flexible woven composites under static uniaxial tension, with a view to generalizing the approach to biaxial tension. The work aims at a qualitative and quantitative description of the mechanical deformation processes occurring in the structure of the studied materials under tension: the straightening of the strands of the reinforcing layer and the growth of the mutual pressure of the cross-lying reinforcement strands. At the beginning of the deformation process, the straightening of the threads and the increase in their mutual pressure are most intense. As the load level grows, the change of these parameters slows down: the bending of the reinforcement strands turns into central tension, and the load from the mutual pressure no longer increases (it tends to a constant). To simulate the described processes, the basic geometrical and mechanical parameters of the material affecting the deformation process are introduced, and the necessary terminology and characteristics are given. Because of the high geometric nonlinearity, all the processes are described in increments, since significant deformation of the reinforcing layer occurs already at initial load values. For a quantitative and qualitative description of the mechanical deformation processes occurring in the reinforcing layer, analytical dependences are derived that determine the increment of the straightening angle of the reinforcement filaments and the load caused by the mutual pressure of the cross-lying filaments at each step of the load increment. To test the obtained dependences, an example of their application is given for the flexible woven composites VP4126, VP6131 and VP6545. The simulation results confirmed the assumptions about the straightening of the threads and the slowing growth of their mutual pressure. The results and dependences presented here feed directly into the further generalization of the previously proposed analytical models to biaxial tension, since stretching in two directions will significantly reduce the straightening of the threads and increase the mutual pressure under similar loads.
- Variance reduction for minimax problems with a small dimension of one of the variables
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 257-275

The paper is devoted to convex-concave saddle point problems where the objective is a sum of a large number of functions. Such problems attract considerable attention of the mathematical community due to the variety of applications in machine learning, including adversarial learning, adversarial attacks and robust reinforcement learning, to name a few. The individual functions in the sum usually represent losses related to examples from a data set. Additionally, the formulation admits a possibly nonsmooth composite term; such terms often reflect regularization in machine learning problems. We assume that the dimension of one of the variable groups is relatively small (about a hundred or less) while the other is large. This case arises, for example, when one considers the dual formulation of a minimization problem with a moderate number of constraints. The proposed approach is based on using Vaidya's cutting plane method to minimize with respect to the outer block of variables; this optimization algorithm is especially effective when the dimension of the problem is not very large. An inexact oracle for Vaidya's method is computed via an approximate solution of the inner maximization problem, which is solved by the accelerated variance-reduced algorithm Katyusha. Thus, we leverage the structure of the problem to achieve fast convergence. Separate complexity bounds for gradients of different components with respect to different variables are obtained in the study. The proposed approach imposes very mild assumptions on the objective: in particular, neither strong convexity nor smoothness is required with respect to the low-dimensional variable group. The number of steps of the proposed algorithm, as well as the arithmetic complexity of each step, explicitly depends on the dimensionality of the outer variable, hence the assumption that it is relatively small.
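As a structural sketch only (Vaidya's cutting-plane method and Katyusha are replaced by simple stand-ins: BFGS for the low-dimensional outer minimization and plain gradient ascent for the inner maximization), the following illustrates the outer-inner design with an inexact oracle on a toy convex-concave saddle problem; the problem, dimensions and step sizes are all assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Toy saddle problem: min_y max_x  phi(x, y),
#   phi = y^T A x + b^T y + 0.5*lam*||y||^2 - 0.5*||x||^2,
# where y is the small outer block and x is the large inner block.
rng = np.random.default_rng(0)
dim_y, dim_x, lam = 5, 2000, 0.1
A = rng.standard_normal((dim_y, dim_x)) / np.sqrt(dim_x)
b = rng.standard_normal(dim_y)

def inexact_oracle(y, steps=100, lr=0.5):
    """Approximately solve the inner max over x by gradient ascent
    (a stand-in for Katyusha) and return the value and the outer
    gradient at the approximate inner maximizer (Danskin's theorem)."""
    x = np.zeros(dim_x)
    for _ in range(steps):
        x += lr * (A.T @ y - x)            # grad_x phi = A^T y - x
    val = y @ (A @ x) + b @ y + 0.5 * lam * y @ y - 0.5 * x @ x
    grad_y = A @ x + b + lam * y           # grad of the outer function
    return val, grad_y

# Outer minimization over the low-dimensional y. BFGS stands in for
# Vaidya's cutting-plane method used in the paper.
res = minimize(lambda y: inexact_oracle(y)[0], np.ones(dim_y),
               jac=lambda y: inexact_oracle(y)[1], method="BFGS")
print(res.x, res.fun)
```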
- Modifications of the Frank–Wolfe algorithm in the problem of finding the equilibrium distribution of traffic flows
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 53-68

The paper presents various modifications of the Frank–Wolfe algorithm for the equilibrium traffic assignment problem, with the Beckmann model used in the experiments. The article focuses primarily on the choice of the direction of the basic step of the Frank–Wolfe algorithm. The following algorithms are presented: Conjugate Frank–Wolfe (CFW), Bi-conjugate Frank–Wolfe (BFW) and Fukushima Frank–Wolfe (FFW); each modification corresponds to a different approach to the choice of this direction, and some of them are described in previous works of the authors. In this article, two further algorithms are proposed: N-conjugate Frank–Wolfe (NFW) and Weighted Fukushima Frank–Wolfe (WFFW), which ideologically continue the BFW and FFW algorithms. Whereas BFW uses at each iteration the last two directions of the previous iterations to select the next direction conjugate to them, the proposed NFW uses the N previous directions. In Fukushima Frank–Wolfe, the average of several previous directions is taken as the next direction; the proposed modification WFFW instead applies exponential smoothing to the previous directions. For a comparative analysis, experiments with the various modifications were carried out on several datasets representing urban structures and taken from publicly available sources, with the relative gap taken as the quality metric. The experimental results showed the advantage of algorithms that use previous directions for step selection over the classic Frank–Wolfe algorithm. In addition, an improvement in efficiency was revealed when using more than two conjugate directions: on various datasets, the modification 3FW showed the best convergence. The proposed modification WFFW often outperformed FFW and CFW, although it performed worse than NFW.
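For reference, the baseline all these modifications build on is the classic Frank–Wolfe step: a linear minimization oracle gives a candidate direction, followed by a convergent step size. Below is a minimal sketch on a toy problem over the probability simplex (in traffic assignment the oracle is instead an all-or-nothing shortest-path assignment; the problem and parameters here are illustrative):

```python
import numpy as np

# Classic Frank-Wolfe on a toy problem: min f(x) = 0.5*||x - b||^2
# over the probability simplex.
rng = np.random.default_rng(0)
n = 50
b = rng.random(n)

x = np.ones(n) / n                   # feasible starting point
for k in range(200):
    grad = x - b
    # Linear minimization oracle over the simplex: a vertex.
    s = np.zeros(n)
    s[np.argmin(grad)] = 1.0
    d = s - x                        # classic FW direction
    gamma = 2.0 / (k + 2.0)          # standard diminishing step size
    x += gamma * d
```

The modifications in the paper (CFW, BFW, NFW, FFW, WFFW) keep this skeleton and change only how the direction d is formed from the oracle output and the previous directions.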
- Identification of an object model in the presence of unknown disturbances with a wide frequency range based on the transition to signal increments and data sampling
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 315-337

The work is devoted to the problem of creating a model with stationary parameters from historical data under conditions of unknown disturbances. The case is considered when a representative sample of object states can be formed only from historical data accumulated over a significant period of time. It is assumed that the unknown disturbances can act in a wide frequency range and may have low-frequency and trend components. In such a situation, including data from different time periods in the sample can lead to inconsistencies and greatly reduce the accuracy of the model. The paper provides an overview of approaches and methods for data harmonization, with the main attention paid to data sampling, and assesses the applicability of various sampling options as a tool for reducing the level of uncertainty. We propose a method for identifying a model of a self-leveling object using data accumulated over a significant period of time under unknown disturbances with a wide frequency range. The method is focused on creating a model with stationary parameters that does not require periodic readjustment to new conditions. It is based on the combined use of sampling and representation of the data from individual time periods as increments relative to the initial point in time for the period. This makes it possible to reduce the number of parameters that characterize the unknown disturbances, with a minimum of assumptions limiting the application of the method. As a result, the dimensionality of the search problem is reduced and the computational costs of fitting the model are minimized. Both linear and, in some cases, nonlinear models can be configured. The method was used to develop a model of the closed cooling of steel in a unit for continuous hot-dip galvanizing of steel strip. The model can be used for predictive control of thermal processes and for selecting the strip speed. It is shown that the method makes it possible to develop a model of the thermal processes in a closed cooling section under unknown disturbances, including low-frequency components.
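The core of the method, as described, is representing each period's data as increments relative to the period's first sample, so that disturbance components that are roughly constant within a period cancel out. Here is a minimal sketch of that transformation with an illustrative linear least-squares fit (the model form, names and synthetic data are assumptions, not the paper's galvanizing-line model):

```python
import numpy as np

# Sketch of the increment transformation: within each historical period,
# subtract the period's initial sample so slow/trend disturbances that are
# roughly constant over a period cancel out.
def to_increments(segments):
    """segments: list of (X, y) arrays from separate time periods."""
    Xs, ys = [], []
    for X, y in segments:
        Xs.append(X - X[0])          # increments of the inputs
        ys.append(y - y[0])          # increments of the output
    return np.vstack(Xs), np.concatenate(ys)

def fit_linear(segments):
    X, y = to_increments(segments)
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)   # stationary parameters
    return theta

# Synthetic demo: two periods with different unknown constant offsets.
rng = np.random.default_rng(1)
theta_true = np.array([2.0, -1.0])
def period(offset):
    X = rng.random((100, 2))
    return X, X @ theta_true + offset + 0.01 * rng.standard_normal(100)

theta = fit_linear([period(5.0), period(-3.0)])   # close to theta_true
```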
- Comparison of mobile operating systems based on software reliability growth models
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 325-334

Evaluation of software reliability is an important part of the process of developing modern software. Many studies aim at improving models for measuring and predicting the reliability of software products. However, little attention is paid to approaches for comparing existing systems in terms of software reliability. Despite its enormous importance for practice (and for managing software development), a complete and proven comparison methodology does not exist. In this article, we propose a software reliability comparison methodology that makes extensive use of software reliability growth models (SRGMs). The proposed methodology provides a certain level of flexibility and abstraction while keeping objectivity, i.e. it relies on measurable comparison criteria. Also, given the comparison methodology together with a set of SRGMs and evaluation criteria, it becomes much easier to disseminate information about the reliability of a wide range of software systems. The methodology was evaluated on three open-source mobile operating systems: Sailfish, Tizen and CyanogenMod.
A byproduct of our study is a comparison of the three analyzed open-source mobile operating systems. The goal of this research is to determine which OS is the strongest in terms of reliability. To this end we performed a GQM analysis and identified 3 questions and 8 metrics. Considering the comparison of the metrics, Sailfish appears in most cases to be the best performing OS; however, it is also the OS that most often performs the worst. By contrast, Tizen scores the best in 3 cases out of 8, but the worst in only one case out of 8.
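The specific set of SRGMs used in the methodology is detailed in the paper; as one standard example of the model class, the following fits the Goel-Okumoto model m(t) = a(1 - e^(-bt)) to cumulative defect counts (the data and initial guesses below are made up purely for the sketch):

```python
import numpy as np
from scipy.optimize import curve_fit

# Goel-Okumoto software reliability growth model: the expected cumulative
# number of defects found by time t is m(t) = a * (1 - exp(-b * t)).
# One common SRGM; the paper evaluates a set of such models per OS.
def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

# Illustrative data: (week, cumulative defects) for one hypothetical OS.
t = np.arange(1, 13, dtype=float)
defects = np.array([12, 25, 34, 44, 50, 57, 61, 66, 68, 71, 72, 74],
                   dtype=float)

(a, b), _ = curve_fit(goel_okumoto, t, defects, p0=(100.0, 0.1))
residual = defects - goel_okumoto(t, a, b)
print(f"a={a:.1f} (expected total defects), b={b:.3f}, "
      f"RMSE={np.sqrt((residual**2).mean()):.2f}")
```

Fitting the same model set to the defect history of each system and comparing the fitted curves and predictions is the kind of measurable criterion the methodology builds on.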
- Algorithms of through calculation for damage processes
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 645-666

The paper reviews existing approaches to calculating the destruction of solids. The main attention is paid to algorithms that use a unified approach to the calculation of deformation for both the intact and the destroyed states of the material. A thermodynamic derivation is presented of unified rheological relationships that take into account the elastic, viscous and plastic properties of materials and describe the loss of resistance to deformation as microdamage accumulates. It is shown that the mathematical model under consideration provides a continuous dependence of the solution on the input parameters (parameters of the material medium, initial and boundary conditions, discretization parameters) in the presence of material softening.
Explicit and implicit matrix-free algorithms for calculating the evolution of deformation and the development of fracture are presented. The implicit schemes are implemented using iterations of the conjugate gradient method, with each iteration costing exactly as much as one time step of a two-layer explicit scheme, so the solution algorithms are very simple.
The results of solving typical fracture problems for deformable solids under slow (quasistatic) and fast (dynamic) deformation are presented. Based on computational experience, recommendations are given for modeling destruction processes and for ensuring the reliability of numerical solutions.
-
First-order optimization methods are workhorses in a wide range of modern applications in economics, physics, biology, machine learning, control, and other fields. Among first-order methods, accelerated and momentum methods receive special attention because of their practical efficiency. The heavy-ball method (HB) is one of the first momentum methods: it was proposed in 1964, and the first analysis was conducted for quadratic strongly convex functions. Since then a number of variations of HB have been proposed and analyzed. In particular, HB is known for its simplicity of implementation and its performance on nonconvex problems. However, like other momentum methods, it has nonmonotone behavior, and for optimal parameters the method suffers from the so-called peak effect. To address this issue, in this paper we consider an averaged version of the heavy-ball method (AHB). We show that for quadratic problems AHB has a smaller maximal deviation from the solution than HB. Moreover, for general convex and strongly convex functions, we prove non-accelerated rates of global convergence of AHB, of its weighted version WAHB, and of AHB with restarts R-AHB. To the best of our knowledge, such guarantees for HB with averaging were not explicitly proven for strongly convex problems in the existing works. Finally, we conduct several numerical experiments on minimizing quadratic and nonquadratic functions to demonstrate the advantages of using averaging for HB. We also tested one more modification of AHB called the tail-averaged heavy-ball method (TAHB). In the experiments, we observed that HB with a properly adjusted averaging scheme converges faster than HB without averaging and has smaller oscillations.
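From the abstract's description, AHB simply averages the iterates of the standard heavy-ball recursion x_{k+1} = x_k - alpha*grad f(x_k) + beta*(x_k - x_{k-1}). A minimal sketch on a strongly convex quadratic, with illustrative step and momentum values:

```python
import numpy as np

# Heavy-ball (HB) vs. averaged heavy-ball (AHB) on a strongly convex
# quadratic f(x) = 0.5 * x^T A x, whose minimizer is the origin.
rng = np.random.default_rng(0)
Q = rng.standard_normal((20, 20))
A = Q @ Q.T + np.eye(20)            # symmetric positive definite

alpha, beta = 0.02, 0.9             # illustrative step size and momentum
x_prev = x = rng.standard_normal(20)
avg = x.copy()                      # running average of the iterates

for k in range(1, 500):
    grad = A @ x
    x_next = x - alpha * grad + beta * (x - x_prev)   # HB update
    x_prev, x = x, x_next
    avg += (x - avg) / (k + 1)      # AHB: average of all iterates so far

# The averaged iterate typically oscillates less around the minimizer.
print(np.linalg.norm(x), np.linalg.norm(avg))
```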