-
Difference scheme for solving problems of hydrodynamics for large grid Peclet numbers
Computer Research and Modeling, 2019, v. 11, no. 5, pp. 833-848
The paper discusses the development and application of a method that accounts for the fullness of rectangular cells with material, in particular a liquid, in order to increase the smoothness and accuracy of finite-difference solutions of hydrodynamic problems with a complex shape of the boundary surface. Two problems of computational hydrodynamics are considered to study the capabilities of the proposed difference schemes: the plane two-dimensional flow of a viscous fluid between two coaxial semi-cylinders and the transport of a substance between coaxial semi-cylinders. The diffusion and convection operators are discretized on the basis of the integro-interpolation method, both with and without taking cell fullness into account. For solving the diffusion – convection problem at large grid Peclet numbers, it is proposed to use a difference scheme that takes the cell fullness function into account, as well as a scheme based on a linear combination of the Upwind and Standard Leapfrog difference schemes with weight coefficients obtained by minimizing the approximation error at small Courant numbers. An analytical solution describing the Couette – Taylor flow is used as a reference to estimate the accuracy of the numerical solution. With direct use of rectangular grids (stepwise approximation of the boundaries), the relative error of the calculations reaches 70%; under the same conditions the proposed method reduces the error to 6%. It is shown that refining a rectangular grid by a factor of 2-8 in each spatial direction does not give the same increase in accuracy as the numerical solutions obtained with cell fullness taken into account. The proposed schemes based on a linear combination of the Upwind and Standard Leapfrog difference schemes with weighting factors of 2/3 and 1/3, respectively, obtained by minimizing the approximation error, have a lower grid viscosity for the diffusion – convection problem and, as a consequence, describe the behavior of the solution more accurately at large grid Peclet numbers.
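As an illustration of the kind of weighted scheme described above, for the one-dimensional constant-coefficient convection equation $u_t + a u_x = 0$ (with $a > 0$) a linear combination with weights 2/3 and 1/3 can be written as follows; the stencils and notation here are an assumption made for illustration and may differ from the schemes used in the paper for the full diffusion – convection problem:

$$\frac{2}{3}\left(\frac{u_i^{n+1}-u_i^{n}}{\tau}+a\,\frac{u_i^{n}-u_{i-1}^{n}}{h}\right)+\frac{1}{3}\left(\frac{u_i^{n+1}-u_i^{n-1}}{2\tau}+a\,\frac{u_{i+1}^{n}-u_{i-1}^{n}}{2h}\right)=0,$$

where the first bracket is the residual of the Upwind scheme, the second is the residual of the Standard Leapfrog scheme, $\tau$ is the time step and $h$ is the grid spacing.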
-
Ellipsoid method for convex stochastic optimization in small dimension
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1137-1147
The article considers minimization of the expectation of a convex function. Problems of this type often arise in machine learning and a variety of other applications. In practice, stochastic gradient descent (SGD) and similar procedures are usually used to solve such problems. We propose to use the ellipsoid method with mini-batching, which converges linearly and can be more efficient than SGD for a class of problems. This is verified by our experiments, which are publicly available. The algorithm requires neither smoothness nor strong convexity of the objective to achieve linear convergence; thus, its complexity does not depend on the condition number of the problem. We prove that the method arrives at an approximate solution with a given probability when using mini-batches of size proportional to the desired accuracy to the power −2. This enables efficient parallel execution of the algorithm, whereas the possibilities for batch parallelization of SGD are rather limited. Despite its fast convergence, the ellipsoid method can result in a greater total number of oracle calls than SGD, which works decently with small batches. The complexity is quadratic in the dimension of the problem, hence the method is suitable for relatively small dimensionalities.
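A minimal sketch of the ellipsoid method with mini-batched stochastic gradients is given below; the oracle `stoch_grad`, its signature, and the stopping rule are assumptions of this illustration and do not reproduce the authors' publicly available implementation.

```python
import numpy as np

def ellipsoid_minibatch(stoch_grad, x0, R, n_iter, batch_size):
    """Ellipsoid method with mini-batched stochastic (sub)gradients.

    Minimizes E[f(x, xi)] assuming a minimizer lies in the ball of radius R
    around x0.  stoch_grad(x, m) must return a (sub)gradient estimate averaged
    over a mini-batch of m samples; this oracle is an assumption of the sketch.
    The dimension of x is assumed to be at least 2.
    """
    n = x0.size
    x = np.asarray(x0, dtype=float).copy()
    H = (R ** 2) * np.eye(n)          # ellipsoid {y : (y - x)^T H^{-1} (y - x) <= 1}
    for _ in range(n_iter):
        g = stoch_grad(x, batch_size)             # mini-batched gradient estimate
        gHg = g @ H @ g
        if gHg <= 0.0:                            # (numerically) zero gradient: stop
            break
        g_tilde = g / np.sqrt(gHg)                # direction normalized by the ellipsoid
        Hg = H @ g_tilde
        x = x - Hg / (n + 1)                      # shift the centre
        H = (n ** 2 / (n ** 2 - 1)) * (H - (2.0 / (n + 1)) * np.outer(Hg, Hg))
    return x
```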
-
Variance reduction for minimax problems with a small dimension of one of the variables
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 257-275
The paper is devoted to convex-concave saddle point problems where the objective is a sum of a large number of functions. Such problems attract considerable attention of the mathematical community due to the variety of applications in machine learning, including adversarial learning, adversarial attacks and robust reinforcement learning, to name a few. The individual functions in the sum usually represent losses related to examples from a data set. Additionally, the formulation admits a possibly nonsmooth composite term. Such terms often reflect regularization in machine learning problems. We assume that the dimension of one of the variable groups is relatively small (about a hundred or less), and the other one is large. This case arises, for example, when one considers the dual formulation for a minimization problem with a moderate number of constraints. The proposed approach is based on using Vaidya's cutting plane method to minimize with respect to the outer block of variables. This optimization algorithm is especially effective when the dimension of the problem is not very large. An inexact oracle for Vaidya's method is calculated via an approximate solution of the inner maximization problem, which is solved by the accelerated variance-reduced algorithm Katyusha. Thus, we leverage the structure of the problem to achieve fast convergence. Separate complexity bounds for gradients of different components with respect to different variables are obtained in the study. The proposed approach imposes very mild assumptions about the objective. In particular, neither strong convexity nor smoothness is required with respect to the low-dimensional variable group. The number of steps of the proposed algorithm as well as the arithmetic complexity of each step explicitly depend on the dimensionality of the outer variable, hence the assumption that it is relatively small.
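A generic way to write the problem class described above, with notation assumed here rather than taken from the paper, is

$$\min_{y \in \mathcal{Y}} \max_{x \in \mathcal{X}} \; \frac{1}{m}\sum_{i=1}^{m} f_i(y, x) + r(y, x),$$

where $y$ is the low-dimensional outer block minimized by Vaidya's cutting plane method, $x$ is the high-dimensional inner block maximized by the accelerated variance-reduced algorithm Katyusha, each $f_i$ is the loss on the $i$-th example, and $r$ is the possibly nonsmooth composite term.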
-
A multilayer neural network for determination of particle size distribution in Dynamic Light Scattering problem
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 265-273
Solving the Dynamic Light Scattering problem makes it possible to determine the particle size distribution (PSD) from the spectrum of the intensity of scattered light. An intensity curve is obtained as a result of the experiment. The experimentally obtained intensity spectrum is compared with the theoretically expected spectrum, which is a Lorentzian line. The main task is to determine, on the basis of these data, the relative concentrations of particles of each class present in the solution. The article presents a method for constructing and using a neural network trained on synthetic data to determine the PSD in a solution in the range of 1-500 nm. The neural network has a fully connected layer of 60 neurons with the ReLU activation function, a layer of 45 neurons with the same activation function, a dropout layer, and two layers with 15 and 1 neurons (the network output). The article describes how the network was trained and tested on synthetic and experimental data. On the synthetic data, the root-mean-square error (RMSE) was 1.3157 nm. Experimental data were obtained for particle sizes of 200 nm, 400 nm, and a solution containing representatives of both sizes. The results of the neural network are compared with those of classical linear methods. The disadvantage of the classical methods is that it is difficult to choose the degree of regularization: too much regularization smooths out the particle size distribution curves, while weak regularization gives oscillating curves and low reliability of the results. The paper shows that the neural network gives a good prediction for particles of large size. For small sizes the prediction is worse, but the error quickly decreases as the particle size increases.
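A minimal PyTorch sketch of the layer stack described above; the input size, the dropout probability, and the exact placement of the dropout layer are assumptions, since the abstract does not fix them.

```python
import torch
import torch.nn as nn

N_INPUT = 64          # number of points in the measured intensity spectrum (assumed)

# Layer sizes follow the abstract: 60 -> 45 -> dropout -> 15 -> 1
model = nn.Sequential(
    nn.Linear(N_INPUT, 60), nn.ReLU(),
    nn.Linear(60, 45), nn.ReLU(),
    nn.Dropout(p=0.2),             # dropout probability is an assumption
    nn.Linear(45, 15), nn.ReLU(),
    nn.Linear(15, 1),              # network output
)

pred = model(torch.randn(8, N_INPUT))   # batch of 8 synthetic spectra
print(pred.shape)                       # torch.Size([8, 1])
```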
-
Determination of the model sound speed for solving the plane shear fluid flow problem by the SPH method
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 339-351
The discrete statement of a problem by the smoothed particle hydrodynamics (SPH) method includes a set of discretization constants. Of particular note among them is the model sound speed $c_0$, which relates the instantaneous density of an SPH particle to the resulting pressure through the equation of state (a typical form of such an equation is recalled after this abstract).
The paper describes an approach to the exact determination of the required value of the model sound speed. It is based on an analysis of how the SPH-particle density changes as the particles shift relative to each other. The plane shear flow problem is taken as an example of continuous medium motion; the object of the analysis is the relative compaction function $\varepsilon_\rho$ in an SPH particle. The function $\varepsilon_\rho$ was studied for various smoothing kernels, which made it possible to establish the pulsating nature of the pressures arising in the particles. The uniform distribution of neighbors in the smoothing domain at which the compaction in a particle reaches its maximum was also determined.
By comparing the function $\varepsilon_\rho$ with the SPH approximation of the equation of motion, a relation is established between the discretization parameter $c_0$, the shape of the smoothing kernel, and the other problem parameters. As a result, an equation is formulated whose solution provides the necessary and sufficient value of the model sound speed. Expressions for the root $c_0$ of this equation are given for three different smoothing kernels; for the parameters of the plane shear flow problem they simplify from polynomials to numerical coefficients.
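As mentioned at the beginning of this abstract, $c_0$ enters the SPH equation of state. A common weakly compressible (linearized) form of such an equation is recalled below, with the caveat that the particular form used in the paper (for example, the full Tait equation) may differ:

$$p_i = c_0^{2}\,(\rho_i - \rho_0),$$

where $\rho_i$ is the instantaneous density of the $i$-th SPH particle and $\rho_0$ is the reference density of the medium.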
-
High-Reynolds number calculations of turbulent heat transfer in FlowVision software
Computer Research and Modeling, 2018, v. 10, no. 4, pp. 461-481
This work presents the model of heat wall functions FlowVision (WFFV), which allows simulation of non-isothermal flows of fluid and gas near solid surfaces on relatively coarse grids with the use of turbulence models. The work follows the research on the development of wall functions applicable in a wide range of values of the quantity y+. Model WFFV assumes smooth profiles of the tangential component of velocity, turbulent viscosity, temperature, and turbulent heat conductivity near a solid surface. The possibility of using a simple algebraic model for calculation of a variable turbulent Prandtl number is investigated in this study (the turbulent Prandtl number enters model WFFV as a parameter). The results are satisfactory. The details of the implementation of model WFFV in the FlowVision software are explained. In particular, the boundary condition for the energy equation used in high-Reynolds-number calculations of non-isothermal flows is considered. The boundary condition is deduced for the energy equation written via thermodynamic enthalpy and via full enthalpy. The capability of the model is demonstrated on two test problems: flow of incompressible fluid past a plate and supersonic flow of gas past a plate (M = 3).
Analysis of the literature shows that there is essential ambiguity in the experimental data and, as a consequence, in the empirical correlations for the Stanton number, a dimensionless heat flux (its conventional definition is recalled after this abstract). The calculations suggest that the default values of the model parameters, automatically specified in the program, allow calculation of heat fluxes at extended solid surfaces with engineering accuracy. At the same time, it is obvious that one cannot invent universal wall functions. For this reason, the controls of model WFFV are made accessible from the FlowVision interface. When necessary, a user can tune the model for simulation of the required type of flow.
The proposed model of wall functions is compatible with all the turbulence models implemented in the FlowVision software: the algebraic model of Smagorinsky, the Spalart-Allmaras model, the SST $k-\omega$ model, the standard $k-\varepsilon$ model, the $k-\varepsilon$ model of Abe, Kondoh, Nagano, the quadratic $k-\varepsilon$ model, and $k-\varepsilon$ model FlowVision.
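The Stanton number referred to above is conventionally defined as a dimensionless wall heat flux. The exact reference temperature used in the paper's correlations is not specified in the abstract (for supersonic flow the adiabatic-wall, or recovery, temperature is often used instead of $T_\infty$), so the form below is only the standard textbook definition:

$$\mathrm{St} = \frac{q_w}{\rho_\infty u_\infty c_p\,(T_w - T_\infty)},$$

where $q_w$ is the wall heat flux, $\rho_\infty$ and $u_\infty$ are the free-stream density and velocity, $c_p$ is the specific heat at constant pressure, and $T_w$ and $T_\infty$ are the wall and free-stream temperatures.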
-
A study of the influence of two geometric parameters on the accuracy of solving the hydrostatic problem by the SPH method
Computer Research and Modeling, 2021, v. 13, no. 5, pp. 979-992
Two significant geometric parameters that affect the interpolation of physical quantities in the smoothed particle hydrodynamics (SPH) method are proposed: the smoothing coefficient, which relates the particle size to the smoothing radius, and the volume coefficient, which correctly determines the particle mass for a given distribution of particles in the medium.
The paper proposes a technique for assessing the influence of these parameters on the accuracy of SPH interpolations when solving the hydrostatic problem. Analytical functions of the relative error of the density and of the pressure gradient in the medium are introduced for the accuracy estimate; these functions depend on the smoothing coefficient and the volume coefficient. Specifying a particular interpolation form in the SPH method makes it possible to convert the relative error functions from differential form to algebraic polynomial form. The roots of this polynomial give the values of the smoothing coefficient that provide the minimum interpolation error for an assigned volume coefficient.
In this work, the relative error functions of the density and of the pressure gradient were derived and analyzed for a set of popular kernels with different smoothing radii. There is no single value of the smoothing coefficient, common to all the considered kernels, that provides the minimum error of both SPH interpolations. Kernels with different smoothing radii are identified that provide the smallest errors of the SPH interpolations when solving the hydrostatic problem. Likewise, certain kernels with different smoothing radii were found that do not allow correct interpolation when the hydrostatic problem is solved by the SPH method.
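To make the role of the two coefficients concrete, the standard SPH density summation can be written as follows; the symbols $\kappa$ for the smoothing coefficient and $k_V$ for the volume coefficient are assumptions of this sketch and may not match the paper's notation:

$$\rho_i = \sum_j m_j\, W\big(|\mathbf{r}_i - \mathbf{r}_j|,\, h\big), \qquad h = \kappa\, \Delta x, \qquad m_j = \rho_0\, (k_V\, \Delta x)^{d},$$

where $\Delta x$ is the particle size, $h$ is the smoothing radius, $W$ is the smoothing kernel, $d$ is the spatial dimension, and $\rho_0$ is the reference density of the medium.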
-
Analogues of the relative strong convexity condition for relatively smooth problems and adaptive gradient-type methods
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 413-432
This paper is devoted to some variants of improving the convergence rate guarantees of gradient-type algorithms for relatively smooth and relatively Lipschitz-continuous problems in the case of additional information about some analogues of the strong convexity of the objective function. We consider two classes of problems, namely, convex problems with a relative functional growth condition, and problems (generally, non-convex) with an analogue of the Polyak – Lojasiewicz gradient dominance condition with respect to Bregman divergence. For the first type of problems, we propose two restart schemes for gradient-type methods and justify theoretical estimates of the convergence of two algorithms with adaptively chosen parameters corresponding to the relative smoothness or Lipschitz property of the objective function. The first of these algorithms is simpler in terms of the stopping criterion of its iterations, but for this algorithm the near-optimal computational guarantees are justified only on the class of relatively Lipschitz-continuous problems. The restart procedure of the other algorithm, in its turn, allowed us to obtain more universal theoretical results. We proved a near-optimal complexity estimate on the class of convex relatively Lipschitz-continuous problems with a functional growth condition, and we also obtained linear convergence rate guarantees on the class of relatively smooth problems with a functional growth condition. For the class of problems with an analogue of the gradient dominance condition with respect to the Bregman divergence, estimates of the quality of the output solution were obtained using adaptively selected parameters. We also present the results of some computational experiments illustrating the performance of the methods for the second approach at the conclusion of the paper. As examples, we considered a linear inverse Poisson problem (minimizing the Kullback – Leibler divergence), its regularized version which allows guaranteeing relative strong convexity of the objective function, as well as an example of a relatively smooth and relatively strongly convex problem. In particular, calculations show that a relatively strongly convex function may not satisfy the relative variant of the gradient dominance condition.
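These analogues are usually stated via the Bregman divergence $V(y, x) = d(y) - d(x) - \langle \nabla d(x),\, y - x\rangle$ generated by a prox-function $d$. One common way to write the relative functional growth and relative smoothness conditions mentioned above is given below, with the caveat that the paper's exact definitions and constants may differ:

$$f(x) - f^{*} \ \geq\ \mu\, V(x^{*}, x) \quad \text{(relative functional growth)},$$
$$f(y) \ \leq\ f(x) + \langle \nabla f(x),\, y - x\rangle + L\, V(y, x) \quad \text{(relative smoothness)},$$

where $x^{*}$ is a minimizer, $f^{*} = f(x^{*})$, and $\mu > 0$, $L > 0$ are constants.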
-
Adaptive first-order methods for relatively strongly convex optimization problems
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 445-472
The article is devoted to first-order adaptive methods for optimization problems with relatively strongly convex functionals. The concept of relative strong convexity significantly extends the classical concept of convexity by replacing the Euclidean norm in the definition by the distance in a more general sense (more precisely, by Bregman's divergence). An important feature of the considered classes of problems is the reduced requirements concerning the level of smoothness of the objective functionals. More precisely, we consider relatively smooth and relatively Lipschitz-continuous objective functionals, which allows us to apply the proposed techniques for solving many applied problems, such as the intersection of ellipsoids problem (IEP), the Support Vector Machine (SVM) for a binary classification problem, etc. If the objective functional is convex, the condition of relative strong convexity can be satisfied using problem regularization. In this work, we propose adaptive gradient-type methods for optimization problems with relatively strongly convex and relatively Lipschitz-continuous functionals for the first time. Further, we propose universal methods for relatively strongly convex optimization problems. This technique is based on introducing an artificial inaccuracy into the optimization model, so the proposed methods can be applied both to the case of relatively smooth and relatively Lipschitz-continuous functionals. Additionally, we demonstrate the optimality of the proposed universal gradient-type methods up to multiplication by a constant for both classes of relatively strongly convex problems. Also, we show how to apply the technique of restarts of the mirror descent algorithm to solve relatively Lipschitz-continuous optimization problems, and we prove the optimal estimate of the rate of convergence of such a technique. Finally, we present the results of numerical experiments comparing the performance of the proposed methods.
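The basic building block of the gradient-type methods discussed above for relatively smooth objectives is the Bregman proximal (mirror-descent-type) step. A minimal non-adaptive form is recalled below, where $V$ is the Bregman divergence, $Q$ is the feasible set, and $L$ is the relative smoothness constant that the adaptive methods of the paper select on the fly:

$$x_{k+1} = \arg\min_{x \in Q} \Big\{ \langle \nabla f(x_k),\, x - x_k \rangle + L\, V(x, x_k) \Big\}.$$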
-
Subgradient methods for non-smooth optimization problems with some relaxation of sharp minimum
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 473-495
Non-smooth optimization often arises in many applied problems. The issues of developing efficient computational procedures for such problems in high-dimensional spaces are very topical. First-order methods (subgradient methods) are well applicable here, but in fairly general situations they lead to weak rate-of-convergence guarantees for large-scale problems. One approach to this type of problem is to identify a subclass of non-smooth problems that allow relatively optimistic results on the rate of convergence. For example, one of the options for additional assumptions can be the condition of a sharp minimum, proposed in the late 1960s by B. T. Polyak. When the minimal value of the function is available, for Lipschitz-continuous problems with a sharp minimum it turned out to be possible to propose a subgradient method with the Polyak step-size, which guarantees a linear rate of convergence in the argument. This approach made it possible to cover a number of important applied problems (for example, the problem of projecting onto a convex compact set). However, both the condition of the availability of the minimal value of the function and the condition of a sharp minimum itself look rather restrictive. In this regard, in this paper we propose a generalized condition for a sharp minimum, somewhat similar to the inexact oracle proposed recently by Devolder – Glineur – Nesterov. The proposed approach makes it possible to extend the class of applicability of subgradient methods with the Polyak step-size to the situation of inexact information about the value of the minimum, as well as an unknown Lipschitz constant of the objective function. Moreover, the use of local analogues of the global characteristics of the objective function makes it possible to apply results of this type to wider classes of problems. We show the possibility of applying the proposed approach to strongly convex non-smooth problems, and we also make an experimental comparison with the known optimal subgradient method for such a class of problems. Moreover, some results were obtained concerning the applicability of the proposed technique to certain types of problems with convexity relaxations: the recently proposed notion of weak $\beta$-quasi-convexity and ordinary quasi-convexity. Also in the paper, we study a generalization of the described technique to the situation in which a $\delta$-subgradient of the objective function is available instead of the usual subgradient. For one of the considered methods, conditions are found under which, in practice, it is possible to avoid projecting the iterative sequence onto the feasible set of the problem.
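A minimal sketch of the subgradient method with the Polyak step-size discussed above, assuming the minimal value $f^*$ is known exactly; the paper's generalized setting with inexact information about the minimum and an unknown Lipschitz constant is not reproduced here.

```python
import numpy as np

def polyak_subgradient(f, subgrad, x0, f_star, n_iter=1000, tol=1e-10):
    """Subgradient method with the Polyak step-size alpha_k = (f(x_k) - f*) / ||g_k||^2."""
    x = np.asarray(x0, dtype=float)
    best = x.copy()
    for _ in range(n_iter):
        g = subgrad(x)
        gap = f(x) - f_star
        if gap <= tol or np.dot(g, g) == 0.0:   # reached the target accuracy
            break
        x = x - (gap / np.dot(g, g)) * g        # Polyak step
        if f(x) < f(best):
            best = x.copy()
    return best

# Example with a sharp minimum: f(x) = ||x||_1, f* = 0
f = lambda x: np.sum(np.abs(x))
subgrad = lambda x: np.sign(x)
x_opt = polyak_subgradient(f, subgrad, x0=np.array([3.0, -2.0]), f_star=0.0)
```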