All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Linearly convergent gradient-free methods for minimization of parabolic approximation
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 239-255
Finding the global minimum of a nonconvex function is one of the key and most difficult problems of modern optimization. In this paper we consider special classes of nonconvex problems which have a clear and distinct global minimum.
In the first part of the paper we consider two classes of «good» nonconvex functions which can be bounded below and above by parabolic functions. This class of problems has not been widely studied in the literature, although it is rather interesting from an applied point of view. Moreover, for such problems first-order and higher-order methods may be completely ineffective in finding a global minimum, because the function may oscillate heavily or be very noisy. Therefore, our new methods use only zero-order information and are based on grid search. The size and fineness of this grid, and hence the guaranteed convergence rate and oracle complexity, depend on the «goodness» of the problem. In particular, we show that if the function is bounded by fairly close parabolic functions, then the complexity is independent of the dimension of the problem. We show that our new methods converge linearly, with $O(\log(1/\varepsilon))$ complexity, to a global minimum on the cube.
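To make the zero-order idea concrete, below is a minimal grid-search sketch for a function squeezed between two parabolas; the dimension, grid resolution and test function are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from itertools import product

def grid_search_min(f, lo, hi, n_points, dim):
    """Evaluate f at the nodes of a uniform grid on the cube [lo, hi]^dim
    and return the best node; only zero-order (function value) information is used."""
    axis = np.linspace(lo, hi, n_points)
    best_x, best_val = None, np.inf
    for point in product(axis, repeat=dim):
        x = np.array(point)
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# A heavily oscillating test function bounded between ||x||^2 and 2||x||^2 (illustrative)
f = lambda x: np.dot(x, x) * (1.5 + 0.5 * np.sin(40.0 * np.linalg.norm(x)))
x_best, f_best = grid_search_min(f, lo=-1.0, hi=1.0, n_points=41, dim=2)
print(x_best, f_best)
```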
In the second part of the paper, we consider the nonconvex optimization problem from a different angle. We assume that the objective function is the sum of a convex quadratic function and a nonconvex «noise» term proportional to the distance to the global solution. Considering functions with such noise assumptions for zero-order methods is new in the literature. For such a problem, we use the classical gradient-free approach with gradient approximation through finite differences. We show how the convergence analysis for our problems can be reduced to the standard analysis for convex optimization problems. In particular, we achieve a linear convergence rate for such problems as well.
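The finite-difference gradient approximation used in this second part can be sketched as follows; the smoothing step, the descent step size and the noisy quadratic test function are assumptions for illustration only.

```python
import numpy as np

def fd_gradient(f, x, h=1.0e-6):
    """Central finite-difference approximation of the gradient (zero-order oracle only)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def gradient_free_descent(f, x0, step=0.1, n_iter=200):
    """Classical gradient descent driven by the finite-difference gradient estimate."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        x -= step * fd_gradient(f, x)
    return x

# Convex quadratic plus a small nonconvex "noise" term proportional to the distance
# to the solution (all coefficients are illustrative assumptions)
x_star = np.array([1.0, -2.0])
f = lambda x: np.dot(x - x_star, x - x_star) + 0.1 * np.linalg.norm(x - x_star) * np.sin(5.0 * x[0])
print(gradient_free_descent(f, np.zeros(2)))   # approaches a neighborhood of x_star
```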
Experimental results confirm the efficiency and practical applicability of all the obtained methods.
-
Solution to a two-dimensional nonlinear heat equation using null field method
Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1449-1467
The paper deals with a heat wave motion problem for a degenerate second-order nonlinear parabolic equation with power nonlinearity. The considered boundary condition specifies in the plane the equation of motion of the circular zero front of the heat wave. A new numerical-analytical algorithm for solving the problem is proposed. A solution is constructed step by step in time using difference time discretization. At each time step, a boundary value problem for the Poisson equation corresponding to the original equation at a fixed time is considered. This problem is, in fact, an inverse Cauchy problem in a domain whose initial boundary is free of boundary conditions, while two boundary conditions (Neumann and Dirichlet) are specified on the current boundary (the heat wave). A solution of this problem is constructed as the sum of a particular solution to the nonhomogeneous Poisson equation and a solution to the corresponding Laplace equation satisfying the boundary conditions. Since the inhomogeneity depends on the desired function and its derivatives, an iterative solution procedure is used. The particular solution is sought by the collocation method using an expansion of the inhomogeneity in radial basis functions. The inverse Cauchy problem for the Laplace equation is solved by the null field method as applied to a circular domain with a circular hole; this method is used for the first time to solve such a problem. The calculation algorithm is optimized by parallelizing the computations, which allows the algorithm to be implemented efficiently on high-performance computing servers. The algorithm is implemented as a program parallelized using the OpenMP standard for C++, which is well suited to loop-level parallelism. The effectiveness of the algorithm and the robustness of the program are tested by comparing the calculation results with the known exact solution as well as with the numerical solution obtained earlier by the authors using the boundary element method. The computational experiments show good convergence of the iterative processes and higher accuracy of the proposed new algorithm compared with the previously developed one. The analysis of the solutions allows us to select the radial basis functions that are most suitable for the proposed algorithm.
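As an illustration of the collocation step only, the sketch below expands a given inhomogeneity over Gaussian radial basis functions on an annular node set; the RBF type, shape parameter, node layout and sample right-hand side are assumptions, and the particular-solution and null field stages of the algorithm are not shown.

```python
import numpy as np

def rbf_collocation_weights(points, values, shape=8.0):
    """Expand a given inhomogeneity over Gaussian RBFs by collocation:
    solve A c = f with A_ij = exp(-(shape * |x_i - x_j|)^2)."""
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.linalg.solve(np.exp(-(shape * r) ** 2), values)

def rbf_eval(points, weights, x, shape=8.0):
    """Evaluate the RBF expansion at a point x."""
    r = np.linalg.norm(points - x, axis=-1)
    return np.dot(weights, np.exp(-(shape * r) ** 2))

# Collocation nodes in an annulus (circular domain with a circular hole), illustrative
theta = np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False)
rad = np.linspace(0.3, 1.0, 8)
T, R = np.meshgrid(theta, rad)
pts = np.column_stack([(R * np.cos(T)).ravel(), (R * np.sin(T)).ravel()])

rhs = np.exp(-np.sum(pts**2, axis=1))            # a sample smooth inhomogeneity f(x, y)
w = rbf_collocation_weights(pts, rhs)
print(rbf_eval(pts, w, np.array([0.5, 0.0])), np.exp(-0.25))   # approximation vs. exact value
```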
-
Detecting Braess paradox in the stable dynamic model
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 35-51
The work investigates the search for inefficient edges in the stable dynamics model of Nesterov – de Palma (2003). For this purpose, we prove several general theorems about equilibrium properties, including the condition of equal costs for all used routes, which can be extended to all paths involving edges from equilibrium routes. The study demonstrates that the standard problem formulation of finding edges whose removal reduces the cost of travel for all participants has no practical significance, because the same edge can be both efficient and inefficient depending on the network's load. In the work, we introduce the concept of an inefficient edge based on the sensitivity of the total driver costs to the costs on the edge. The paper provides an algorithm for finding inefficient edges and presents the results of numerical experiments for the transportation network of the city of Anaheim.
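For readers unfamiliar with the paradox, the sketch below reproduces the classical four-node Braess example, in which ignoring the shortcut edge lowers the equilibrium cost of every driver; it uses the textbook flow-dependent linear costs rather than the stable dynamic model of the paper, and all numbers are illustrative.

```python
# Classical Braess network: origin A, destination D, 4000 drivers.
# Edge costs: A->B and C->D cost flow/100 minutes, A->C and B->D cost 45 minutes,
# and the shortcut B->C costs 0 minutes.
demand = 4000.0

def path_costs(f_abd, f_acd, f_abcd):
    """Costs of the three A->D paths for a given path-flow split."""
    ab = (f_abd + f_abcd) / 100.0          # flow-dependent edges
    cd = (f_acd + f_abcd) / 100.0
    return ab + 45.0, 45.0 + cd, ab + 0.0 + cd

# Equilibrium WITH the shortcut: everyone takes A->B->C->D (no cheaper path exists)
with_short = path_costs(0.0, 0.0, demand)
# Equilibrium WITHOUT the shortcut: drivers split evenly between the two routes
without_short = path_costs(demand / 2.0, demand / 2.0, 0.0)[:2]

print("used-route cost with shortcut:   ", with_short[2])      # 80 min
print("used-route costs without shortcut:", without_short)     # 65 min on both routes
# Removing the shortcut edge lowers every driver's cost: the Braess paradox.
```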
Keywords: transportation modeling, Braess paradox.
-
Quantitative assessment of seismic risk and energy concepts of earthquake engineering
Computer Research and Modeling, 2018, v. 10, no. 1, pp. 61-76
Currently, earthquake-resistant design of buildings is based on force calculation, with the earthquake effect represented by static equivalent forces that are computed using elastic response spectra (the linear-spectral method), which relate the law of ground motion to the absolute acceleration of a nonlinear oscillator model.
This approach does not directly take into account either the influence of the duration of strong motion or the plastic behavior of the structure. The frequency content and duration of ground vibrations directly affect the energy received by the building and the damage caused to its elements. Unlike force-based or kinematic calculations, the seismic effect on the structure can be interpreted without considering forces and displacements separately, namely as the product of both variables, i.e., the work or input energy (the maximum energy that the building can receive from the earthquake).
With the energy approach to seismic design, it is necessary to evaluate the seismic energy input into the structure and its distribution among the various structural components.
The article substantiates the energy approach to the design of earthquake-resistant buildings and structures as an alternative to the currently used method based on force calculation, in which the earthquake effect is represented by static equivalent forces computed from response spectra.
It is noted that interest in the use of energy concepts in earthquake-resistant design began with the works of Housner, who represented the seismic demand in the form of input seismic energy obtained from the velocity spectrum and suggested that an elastic-plastic system and an elastic system are damaged by one and the same input seismic energy.
Expressions for determining the input energy of the earthquake proposed by various authors are given in this paper. It is shown that modern approaches to ensuring the seismic stability of structures, based on the representation of the earthquake effect as a static equivalent force, do not adequately describe the behavior of the system during an earthquake.
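As a reference point, the sketch below computes the relative input energy $E_I = -\int m\,\ddot{u}_g\,\dot{u}\,dt$ for a single-degree-of-freedom oscillator under a synthetic ground motion; this is one common definition of input energy, and the oscillator parameters and the record are assumptions, not data from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical SDOF oscillator parameters (assumptions, not from the paper)
m, k, zeta = 1.0e3, 4.0e5, 0.05               # mass [kg], stiffness [N/m], damping ratio
c = 2.0 * zeta * np.sqrt(k * m)               # viscous damping coefficient

# Synthetic ground acceleration record (a decaying sine, purely illustrative)
t_rec = np.linspace(0.0, 10.0, 2001)
a_g = 2.0 * np.exp(-0.3 * t_rec) * np.sin(2.0 * np.pi * 1.5 * t_rec)

def rhs(t, y):
    """Relative motion of the SDOF system: m*u'' + c*u' + k*u = -m*a_g(t)."""
    u, v = y
    ag = np.interp(t, t_rec, a_g)
    return [v, -(c * v + k * u) / m - ag]

sol = solve_ivp(rhs, (t_rec[0], t_rec[-1]), [0.0, 0.0], t_eval=t_rec, max_step=0.005)
v = sol.y[1]

# Relative input energy: E_I = -integral of m * a_g * u' dt (rectangle-rule integration)
dt = t_rec[1] - t_rec[0]
E_input = -np.sum(m * a_g * v) * dt
print(f"Relative input energy: {E_input:.1f} J")
```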
In this paper, based on quantitative estimates of seismic risk, the Standard of Organization (STO) “Seismic resistance structures. The main design provisions” developed at NRU MSUCE is analyzed. The developed document is a step forward with respect to the optimal design of earthquake-resistant structures.
The proposed concept makes use of the achievements of modern methods for the calculation of buildings and structures under seismic effects, which are harmonized with the Eurocodes and do not contradict the system of national regulations.
Keywords: earthquake resistance of buildings, energy method, earthquake-resistant construction, response spectra, input earthquake energy, earthquake recurrence period, seismic risk, anti-seismic measures, conceptual design, two-tiered calculation, seismic resistance criteria, nonlinear static and nonlinear dynamic calculation methods.
-
Mathematical simulation of vortex motion in the astrophysical objects on the basis of the gas-dynamic model
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 631-643
The application of a conservative numerical method of fluxes is examined for studying the vortex structures in massive, rapidly rotating compact astrophysical objects under self-gravity. Simulations are performed for objects with different masses and rotation speeds, and the vortex structures of the objects are visualized. The calculations use a gas-dynamic model in which the gas is assumed to be perfect and inviscid. The numerical procedure is based on a finite-difference approximation of the conservation laws of the additive characteristics of the medium for a finite volume. “Upwind” approximations of the densities of distribution of mass, momentum components and total energy are applied. For the simulation of rapidly rotating objects, conservation of the angular momentum component is monitored during the calculation. The evolutionary calculation is carried out with parallel algorithms implemented on a computer system of cluster architecture. The algorithms are based on the standardized Message Passing Interface (MPI); both blocking exchange procedures and non-blocking exchange procedures with completion control are used. Spatial parallelization in two or three directions is carried out depending on the size of the integration domain and the parameters of the computational grid. Within each subdomain, parallelization by physical factors is also performed: the gas-dynamic part and the gravitational forces are computed on different processors, which raises the efficiency of the algorithms. The feasibility of direct calculation of the gravitational forces by summing the interactions between all finite volumes in the integration domain is demonstrated. For finite-volume methods this approach appears more consistent than solving Poisson’s equation for the gravitational potential. The numerical calculations were carried out on a cluster-architecture computer system with a peak performance of 523 TFlops; up to a thousand processors were used.
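The direct summation of gravitational interactions between finite volumes mentioned above can be sketched as follows; the cell positions, masses and softening length are illustrative assumptions.

```python
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def direct_gravity(centers, masses, softening=1.0e3):
    """Gravitational acceleration at every cell center by direct pairwise summation.

    centers : (N, 3) array of finite-volume cell centers [m]
    masses  : (N,)   array of cell masses [kg]
    The softening length (an assumption) avoids the singularity for i == j.
    """
    diff = centers[None, :, :] - centers[:, None, :]           # r_j - r_i
    dist2 = np.sum(diff**2, axis=-1) + softening**2
    inv_d3 = dist2**-1.5
    np.fill_diagonal(inv_d3, 0.0)                               # skip self-interaction
    return G * np.einsum('ij,ijk->ik', masses[None, :] * inv_d3, diff)

# Toy example: a few randomly placed cells of equal mass
rng = np.random.default_rng(0)
centers = rng.uniform(-1.0e7, 1.0e7, size=(200, 3))
masses = np.full(200, 1.0e20)
print(direct_gravity(centers, masses).shape)                    # (200, 3)
```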
-
Modified Gauss–Newton method for solving a smooth system of nonlinear equations
Computer Research and Modeling, 2021, v. 13, no. 4, pp. 697-723
In this paper, we introduce a new version of the Gauss–Newton method for solving a system of nonlinear equations, based on the ideas of an upper bound on the residual of the system and a quadratic regularization term. In practice, the introduced Gauss–Newton method generates a whole parameterized family of methods for solving systems of nonlinear equations and regression problems. The developed family consists entirely of iterative methods, with generalizations to non-Euclidean normed spaces, and includes special forms of Levenberg–Marquardt algorithms. The developed methods use a local model based on a parameterized proximal mapping, which allows us to use an inexact «black-box» oracle with restrictions on computational precision and computational complexity. We perform an efficiency analysis, including global and local convergence, for the developed family of methods with an arbitrary oracle in terms of iteration complexity, the precision and complexity of both the local model and the oracle, and the problem dimensionality. We present global sublinear convergence rates for methods of the proposed family for solving a system of nonlinear equations consisting of Lipschitz smooth functions. We prove local superlinear convergence under additional natural non-degeneracy assumptions on the system of nonlinear functions. We prove both local and global linear convergence for a system of nonlinear equations under the Polyak–Lojasiewicz condition for the proposed Gauss–Newton methods. Besides the theoretical justification of the methods, we also consider practical implementation issues. In particular, for the conducted experiments we present efficient computational schemes for the exact oracle with regard to the dimensionality of the problem. The proposed family of methods unites several existing Gauss–Newton modifications that are frequent in practice, allowing us to construct a flexible and convenient method implementable using standard convex optimization and computational linear algebra techniques.
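A minimal sketch of one member of this family, a Gauss–Newton step with a quadratic regularization term (Levenberg–Marquardt style), is given below; the regularization parameter and the toy system of equations are assumptions, not the paper's experimental setup.

```python
import numpy as np

def gauss_newton_reg(F, J, x0, lam=1.0e-2, tol=1.0e-10, max_iter=100):
    """Regularized Gauss-Newton iteration:
    x_{k+1} = x_k - (J^T J + lam * I)^{-1} J^T F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, Jx = F(x), J(x)
        if np.linalg.norm(r) < tol:
            break
        step = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), Jx.T @ r)
        x = x - step
    return x

# Toy system of nonlinear equations (an assumption, purely illustrative):
# x1^2 + x2^2 - 4 = 0,  exp(x1) + x2 - 1 = 0
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, np.exp(x[0]) + x[1] - 1.0])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [np.exp(x[0]), 1.0]])
print(gauss_newton_reg(F, J, np.array([1.0, 1.0])))   # converges to a root near (1.0, -1.7)
```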
-
Approaches to creating precise geometric models of steel wire ropes in the Gmsh environment using the OpenCascade Core Technology engine
Computer Research and Modeling, 2024, v. 16, no. 6, pp. 1399-1415
A review is carried out of the problems of preparing accurate geometric models of steel ropes based on mathematical models without significant simplifications, taking into account the intended purpose of the model. Possible approaches are shown to the generation of precise geometric models of steel ropes that have no fundamental limitations on their integration into computational domains and on the subsequent construction of finite element models based on them. A generalized parameterized geometric model of single and double twist ropes and its algorithmic implementation using the OpenCASCADE Core Technology geometric modeling kernel in the Gmsh environment (open source software) is considered. The problems of using generic tabular data from steel rope assortment standards as initial data for constructing geometric models are considered. Methods of preliminary collision checking of a geometric model based on its initial data are given. Post-verification methods based on Boolean operations over the wire bodies are given for identifying incorrect results of generating wire bodies with curvilinear side surfaces by the algorithm of sequential hierarchical construction of the individual wires of a single strand and its subsequent copying. Various methods of constructing geometric models of rope wires by extrusion are shown: through a sequence of generatrices forming a body bounded by curvilinear surfaces, through a sequence of generatrices forming a body bounded by linearly approximated surfaces, and by extruding a single generatrix along one guide curve. The computational complexity of the geometric model generation and the required amount of RAM are investigated for the two most universal methods of creating a wire body. A method for estimating the spacing of the generatrices of a single wire is shown, and the influence of this value on the computational complexity of the wire construction procedure is investigated. Recommendations are given for choosing the value of the radial gap between the layers of wires. An algorithmic implementation of the method for searching for collisions of a geometric model of a steel rope in a non-interactive mode is shown, and approaches to the formation of collision-handling procedures are proposed. The approaches presented in the article can be implemented as software modules for execution in the Gmsh environment, as well as in other environments using the OpenCASCADE Core Technology geometric modeling kernel. Such modules allow automation of the construction of accurate geometric models of steel ropes in any configuration, without fundamental restrictions on subsequent use, both stand-alone and in the form of objects (primitives) suitable for integration into a third-party model.
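As an illustration of the parameterized model, the sketch below generates the helical centerline of one wire of a single-twist strand, which could then be passed to a CAD kernel such as OpenCASCADE as a spline guide curve; the pitch radius, lay length and point density are illustrative assumptions, not values from the paper or from a rope standard.

```python
import numpy as np

def strand_wire_centerline(pitch_radius, lay_length, phase, n_turns=3, pts_per_turn=90):
    """Centerline of one outer wire of a single-twist strand: a circular helix with the
    given wire-center pitch radius and lay (pitch) length. The phase angle distinguishes
    the individual wires of the layer."""
    t = np.linspace(0.0, n_turns * 2.0 * np.pi, n_turns * pts_per_turn + 1)
    x = pitch_radius * np.cos(t + phase)
    y = pitch_radius * np.sin(t + phase)
    z = lay_length * t / (2.0 * np.pi)
    return np.column_stack([x, y, z])

# Six outer wires around a straight core wire (a common 1x7 strand layout, illustrative)
wires = [strand_wire_centerline(pitch_radius=2.0, lay_length=60.0, phase=k * np.pi / 3.0)
         for k in range(6)]
print(wires[0].shape)   # (271, 3) points, ready to be fed to a CAD kernel as a spline
```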
-
Topology-based activity recognition: stratified manifolds and separability in sensor space
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 829-850
While working on activity recognition using wearable sensors for healthcare applications, the main issue arises in the classification of activities. When we attempt to classify activities like walking, sitting, or running from accelerometer and gyroscope data, the signals often overlap and noise complicates the classification process. The existing methods do not have solid mathematical foundations to handle this issue. We started with the standard magnitude approach where one computes $m = \sqrt{a^2_1 + a^2_2 + a^2_3}$ from the accelerometer readings, but this approach failed because different activities ended up in overlapping regions. We therefore developed a different approach. Instead of collapsing the 6-dimensional sensor data into simple magnitudes, we keep all six dimensions and treat each activity as a rectangular box in this 6D space. We define these boxes using simple interval constraints. For example, walking occurs when the $x$-axis accelerometer reading is between $2$ and $4$, the $y$-axis reading is between $9$ and $10$, and so on. The key breakthrough is what we call a separability index $s = \frac{d_{\min}}{\sigma}$ that determines how accurately the classification will work. Here $d_{\min}$ represents how far apart the activity boxes are, and $\sigma$ represents the amount of noise present. From this simple idea, we derive a mathematical formula $P(\text{error}) \leqslant (n-1)\exp\left(-\frac{s^2}{8}\right)$ that predicts the error rate even before initiating the experiment. We tested this on the standard UCI-HAR and WISDM datasets and achieved $86.1\,\%$ accuracy. The theoretical predictions matched the actual results within $3\,\%$. This approach outperforms the traditional magnitude methods by $30.6\,\%$ and explains why certain activities overlap with each other.
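A minimal sketch of the separability index and the resulting error bound is given below; the activity box centers, the noise level, and the use of distances between box centers in place of box-to-box distances are simplifying assumptions.

```python
import numpy as np

def error_bound(box_centers, sigma, n_classes):
    """Separability index s = d_min / sigma and the bound P(error) <= (n-1) * exp(-s^2 / 8).
    Distances between box centers are used here as a simplification of d_min."""
    d = np.linalg.norm(box_centers[:, None, :] - box_centers[None, :, :], axis=-1)
    d_min = d[np.triu_indices(len(box_centers), k=1)].min()   # closest pair of activity boxes
    s = d_min / sigma
    return s, (n_classes - 1) * np.exp(-s**2 / 8.0)

# Three hypothetical activity "boxes" in the 6D sensor space (accelerometer + gyroscope)
centers = np.array([[3.0, 9.5, 1.0, 0.2, 0.1, 0.3],    # walking
                    [0.5, 9.8, 0.2, 0.0, 0.0, 0.0],    # sitting
                    [6.0, 8.0, 2.5, 0.8, 0.6, 0.9]])   # running
s, p_err = error_bound(centers, sigma=0.7, n_classes=3)
print(f"separability index s = {s:.2f}, error bound = {p_err:.3e}")
```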
-
Wall functions for high-Reynolds calculations in FlowVision software
Computer Research and Modeling, 2015, v. 7, no. 6, pp. 1221-1239
The article presents the “FlowVision” wall function model. The model allows simulating turbulent flows of fluid and gas over solid impermeable surfaces on different grids. Four turbulence models are considered: $k-\varepsilon$ FlowVision, $k-\varepsilon$ Standard, SST $k-\omega$, SA. Details of the implementation of the turbulence models in the FlowVision software are discussed, and calculations of two test cases are demonstrated.
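As background on what a wall function does, the sketch below solves the generic equilibrium log-law for the friction velocity by fixed-point iteration; this is the textbook formulation, not necessarily the specific FlowVision model, and the flow values are illustrative.

```python
import numpy as np

def friction_velocity(U, y, nu, kappa=0.41, E=8.43, n_iter=50):
    """Solve the equilibrium log-law U/u_tau = (1/kappa) * ln(E * u_tau * y / nu)
    for the friction velocity u_tau by fixed-point iteration."""
    u_tau = np.sqrt(nu * U / y)          # initial guess from the viscous sublayer law
    for _ in range(n_iter):
        u_tau = kappa * U / np.log(E * u_tau * y / nu)
    return u_tau

# Air-like values at a near-wall cell center (illustrative assumptions)
u_tau = friction_velocity(U=10.0, y=1.0e-3, nu=1.5e-5)
print(f"u_tau = {u_tau:.3f} m/s, y+ = {u_tau * 1.0e-3 / 1.5e-5:.1f}")
```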
-
A multilayer neural network for determination of particle size distribution in Dynamic Light Scattering problem
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 265-273
Solution of the Dynamic Light Scattering problem makes it possible to determine the particle size distribution (PSD) from the spectrum of the intensity of scattered light. As a result of the experiment, an intensity curve is obtained. The experimentally obtained intensity spectrum is compared with the theoretically expected spectrum, which is the Lorentzian line. The main task is to determine, on the basis of these data, the relative concentrations of particles of each class present in the solution. The article presents a method for constructing and using a neural network trained on synthetic data to determine the PSD in a solution in the range of 1–500 nm. The neural network has a fully connected layer of 60 neurons with the ReLU activation function at its output, a layer of 45 neurons with the same activation function, a dropout layer, and two layers with 15 and 1 neurons (the network output). The article describes how the network has been trained and tested on synthetic and experimental data. On the synthetic data, the root-mean-square error (RMSE) was 1.3157 nm. Experimental data were obtained for particle sizes of 200 nm, 400 nm and a solution containing representatives of both sizes. The results of the neural network and of the classical linear methods are compared. The disadvantage of the classical methods is that it is difficult to choose the degree of regularization: too much regularization strongly smooths the particle size distribution curves, while weak regularization gives oscillating curves and low reliability of the results. The paper shows that the neural network gives a good prediction for particles with a large size. For small sizes the prediction is worse, but the error quickly decreases as the particle size increases.
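A sketch of the described architecture in PyTorch is given below; the input dimension, the dropout rate and the activations of the last two layers are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

# Input dimension (number of points in the intensity spectrum) is an assumption
N_INPUT = 100

model = nn.Sequential(
    nn.Linear(N_INPUT, 60), nn.ReLU(),   # fully connected layer of 60 neurons, ReLU
    nn.Linear(60, 45), nn.ReLU(),        # layer of 45 neurons, same activation
    nn.Dropout(p=0.2),                   # dropout layer (rate is an assumption)
    nn.Linear(45, 15), nn.ReLU(),        # layer of 15 neurons (activation assumed)
    nn.Linear(15, 1),                    # single output neuron
)

spectrum = torch.randn(8, N_INPUT)       # a batch of 8 synthetic intensity spectra
print(model(spectrum).shape)             # torch.Size([8, 1])
```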
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index




