Search results for 'regularization':
Articles found: 83
  1. Khokhlov N.I., Stetsyuk V.O., Mitskovets I.A.
    Overset grids approach for topography modeling in elastic-wave modeling using the grid-characteristic method
    Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1049-1059

    While modeling seismic wave propagation, it is important to take nontrivial topography into account, as topography causes multiple complex phenomena, such as diffraction at rough surfaces, complex propagation of Rayleigh waves, and side effects caused by wave interference. The primary goal of this research is to construct a method that implements the free surface on topography by using an overset curvilinear grid near the surface while keeping the main grid structured and rectangular. For this combination of a regular and a curvilinear grid, the applicability of the grid-characteristic method on overset grids (also known as the Chimera grid approach) is analyzed. One benefit of this approach is a reduction in computational complexity, since simulation in a regular, homogeneous physical region on a sparse regular rectangular grid is simpler. A side benefit is a simpler mesh-building mechanism: one grid is regular, and the other can be built automatically from surface data. Despite its simplicity, the proposed method allows us to refine the discretization of fractured regions and to minimize the Courant number. The paper compares modeling results produced by a solver based on the proposed method with results produced by the well-known solver specfem2d, as well as with previous modeling results for the same problems. A drawback of the method is that interpolation error can degrade the overall model accuracy and reduce the order of the computational scheme; some countermeasures against this are described. Only two-dimensional models are analyzed here; however, the proposed method can be applied to three-dimensional problems with minimal adaptation.
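
    As a rough illustration of the data exchange an overset (Chimera) grid approach requires, the sketch below interpolates a field from a curvilinear, topography-following grid onto the overlap region of a regular background grid. The grid sizes, the synthetic surface, and the field are illustrative assumptions, not the authors' setup.

```python
# A minimal sketch of the overset-grid exchange step; all shapes and the
# synthetic topography below are illustrative assumptions.
import numpy as np
from scipy.interpolate import griddata

# Curvilinear grid fitted to a synthetic topography
xi, eta = np.meshgrid(np.linspace(0.0, 1.0, 81), np.linspace(0.0, 0.2, 21))
x_curv = xi
z_curv = 0.05 * np.sin(6 * np.pi * xi) + eta      # grid lines follow the surface
u_curv = np.exp(-z_curv)                          # some field on this grid

# Regular background grid covering the overlap region
x_bg, z_bg = np.meshgrid(np.linspace(0, 1, 101), np.linspace(-0.05, 0.25, 31))

# Transfer the curvilinear solution onto background nodes; nodes outside the
# overlap come back as NaN and keep the background solution instead.
u_bg = griddata((x_curv.ravel(), z_curv.ravel()), u_curv.ravel(),
                (x_bg, z_bg), method="linear")
```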

  2. Yudin N.E.
    Modified Gauss–Newton method for solving a smooth system of nonlinear equations
    Computer Research and Modeling, 2021, v. 13, no. 4, pp. 697-723

    In this paper, we introduce a new version of the Gauss–Newton method for solving a system of nonlinear equations, based on an upper bound for the residual of the system and a quadratic regularization term. In practice, the introduced method forms a whole parameterized family of methods for solving systems of nonlinear equations and regression problems. The developed family consists entirely of iterative methods, with generalizations to non-Euclidean normed spaces, and includes special forms of Levenberg–Marquardt algorithms. The methods use a local model based on a parameterized proximal mapping, which allows an inexact "black-box" oracle with restrictions on computational precision and computational complexity. We perform an efficiency analysis, covering global and local convergence of the developed family with an arbitrary oracle, in terms of iteration complexity, the precision and complexity of both the local model and the oracle, and the problem dimensionality. We present global sublinear convergence rates for methods of the proposed family applied to systems of nonlinear equations composed of Lipschitz-smooth functions, and we prove local superlinear convergence under additional natural non-degeneracy assumptions on the system. We also prove local and global linear convergence under the Polyak–Łojasiewicz condition for the proposed Gauss–Newton methods. Besides the theoretical justification, we consider practical implementation issues; in particular, for the conducted experiments we present computational schemes for the exact oracle that are efficient with respect to the problem dimensionality. The proposed family unites several existing Gauss–Newton modifications that are frequent in practice, allowing a flexible and convenient method implementable with standard convex optimization and computational linear algebra techniques.
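
    A minimal sketch of the kind of step such methods take, assuming an exact Jacobian oracle and the Euclidean norm: the Levenberg–Marquardt-type update x⁺ = x − (JᵀJ + λI)⁻¹JᵀF(x) applied to a toy smooth system. The system and λ below are illustrative, not from the paper.

```python
# One regularized Gauss-Newton (Levenberg-Marquardt-type) step; the toy
# system F and the fixed lambda are assumptions for illustration.
import numpy as np

def gauss_newton_step(F, J, x, lam=1e-3):
    """One step for min ||F(x)||^2 with quadratic regularization lam."""
    r = F(x)                               # residual of the nonlinear system
    A = J(x)                               # Jacobian at the current point
    H = A.T @ A + lam * np.eye(x.size)     # regularized Gauss-Newton model
    return x - np.linalg.solve(H, A.T @ r)

# Toy smooth system F(x) = 0
F = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]**2])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, -2 * x[1]]])

x = np.array([2.0, 2.0])
for _ in range(20):
    x = gauss_newton_step(F, J, x)
print(x, F(x))                             # converges to a root of the system
```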

  3. Gladin E.L., Borodich E.D.
    Variance reduction for minimax problems with a small dimension of one of the variables
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 257-275

    The paper is devoted to convex-concave saddle point problems where the objective is a sum of a large number of functions. Such problems attract considerable attention from the mathematical community due to the variety of applications in machine learning, including adversarial learning, adversarial attacks, and robust reinforcement learning, to name a few. The individual functions in the sum usually represent losses related to examples from a data set. Additionally, the formulation admits a possibly nonsmooth composite term; such terms often reflect regularization in machine learning problems. We assume that the dimension of one of the variable groups is relatively small (about a hundred or less) and the other one is large. This case arises, for example, when one considers the dual formulation of a minimization problem with a moderate number of constraints. The proposed approach is based on using Vaidya's cutting plane method to minimize with respect to the outer block of variables. This optimization algorithm is especially effective when the dimension of the problem is not very large. An inexact oracle for Vaidya's method is calculated via an approximate solution of the inner maximization problem, which is solved by the accelerated variance-reduced algorithm Katyusha. Thus, we leverage the structure of the problem to achieve fast convergence. Separate complexity bounds for gradients of different components with respect to different variables are obtained in the study. The proposed approach imposes very mild assumptions on the objective; in particular, neither strong convexity nor smoothness is required with respect to the low-dimensional variable group. The number of steps of the proposed algorithm, as well as the arithmetic complexity of each step, depends explicitly on the dimensionality of the outer variable, hence the assumption that it is relatively small.
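
    A structural sketch of the outer/inner decomposition described above, with two loud simplifications: Vaidya's cutting-plane method is replaced by plain gradient descent on the low-dimensional variable, and the Katyusha inner solve is replaced by a closed-form maximization. Only the shape of the computation, g(y) = max_x f(x, y) with the outer gradient obtained from the inner solution, matches the paper; the bilinear toy objective is an assumption.

```python
# Outer minimization over low-dim y, inner maximization over high-dim x,
# for the toy saddle problem  min_y max_x  x@(A@y - b) - mu/2 ||x||^2 + 1/2 ||y||^2.
import numpy as np

rng = np.random.default_rng(0)
n, m = 2000, 5                          # x is high-dimensional, y is low-dimensional
A = rng.standard_normal((n, m)) / np.sqrt(n)
b = rng.standard_normal(n)
mu = 1.0

def inner_argmax(y):
    # The inner concave maximization has a closed form here; in the paper
    # this step is an approximate solve by the Katyusha algorithm instead.
    return (A @ y - b) / mu

y = np.zeros(m)
for _ in range(200):
    x = inner_argmax(y)
    grad_y = A.T @ x + y                # gradient of g(y) = max_x f(x, y) (Danskin)
    y -= 0.1 * grad_y                   # outer step; the paper uses Vaidya's method

print(np.linalg.norm(A.T @ inner_argmax(y) + y))   # ~0: stationarity of g
```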

  4. Konyukhov A.V., Rostilov T.A.
    Numerical simulation of converging spherical shock waves with symmetry violation
    Computer Research and Modeling, 2025, v. 17, no. 1, pp. 59-71

    The development of π-periodic perturbations of a converging spherical shock wave, leading to a limitation of cumulation, is studied. The study is based on 3D hydrodynamic calculations with the Carnahan – Starling equation of state for a hard-sphere fluid. The method of solving the Euler equations on moving (compressing) grids allows one to trace the evolution of the converging shock wave front with high accuracy over a wide range of its radius. The compression rate of the computational grid is adapted to the motion of the shock wave front, while the motion of the boundaries of the computational domain satisfies the condition of supersonic velocity relative to the medium; as a result, the solution is determined only by the initial data during the grid compression stage. A second-order TVD scheme is used to reconstruct the vector of conservative variables at the boundaries of the computational cells, in combination with the Rusanov scheme for calculating the numerical flux vector. This choice is due to a strong tendency toward carbuncle-type numerical instability in the calculations, known for other classes of flows. For the three-dimensional flow considered here, the carbuncle effect was obtained for the first time, which is explained by the specific nature of the flow: the concavity of the shock wave front in the direction of motion, the unlimited (in the symmetric case) growth of the Mach number, and the stationarity of the front on the computational grid. The applied numerical method made it possible to study the detailed flow pattern at the scale of cumulation termination, which is impossible within the framework of Whitham's method of geometric shock dynamics, previously used to calculate converging shock waves. The study showed that the limitation of cumulation is associated with the transition from Mach interaction of converging shock wave segments to regular interaction, due to the progressive increase of the ratio of the azimuthal to the radial velocity at the shock wave front as its radius decreases. It was found that this ratio can be represented as the product of a bounded oscillating function of the radius and a power function of the radius with an exponent depending on the initial packing density in the hard-sphere model. It is shown that increasing the packing density parameter in the hard-sphere model leads to a significant increase in the pressures achieved in a shock wave with broken symmetry. For the first time, the calculations show that at the scale of cumulation termination the flow is accompanied by the formation of high-energy vortices, which involve the substance that has undergone the greatest shock-wave compression. Since this affects heat and mass transfer in the region of greatest compression, it is important for current practical applications of converging shock waves for initiating reactions (detonation, phase transitions, controlled thermonuclear fusion).
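
    For reference, a minimal sketch of the Rusanov flux mentioned above, written for the 1D Euler equations with an ideal-gas closure rather than the paper's hard-sphere (Carnahan – Starling) equation of state or its moving grid.

```python
# Rusanov (local Lax-Friedrichs) numerical flux for the 1D Euler equations
# with conservative variables U = (rho, rho*u, E); ideal gas is an assumption.
import numpy as np

GAMMA = 1.4

def euler_flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def max_speed(U):
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
    return abs(u) + np.sqrt(GAMMA * p / rho)       # |u| + sound speed

def rusanov_flux(UL, UR):
    """F_{i+1/2} = 0.5 (F(UL) + F(UR)) - 0.5 s_max (UR - UL)."""
    s = max(max_speed(UL), max_speed(UR))
    return 0.5 * (euler_flux(UL) + euler_flux(UR)) - 0.5 * s * (UR - UL)

UL = np.array([1.0, 0.0, 2.5])         # left state (rho, rho*u, E)
UR = np.array([0.125, 0.0, 0.25])      # right state: a Sod-like jump
print(rusanov_flux(UL, UR))
```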

  5. Geller O.V., Vasilev M.O., Kholodov Y.A.
    Building a high-performance computing system for simulation of gas dynamics
    Computer Research and Modeling, 2010, v. 2, no. 3, pp. 309-317

    The aim of this research is to develop a software system for solving gas-dynamics problems in multiply connected integration domains of regular shape on high-performance computing systems. Various parallel computing technologies are compared. The program complex is implemented using multithreaded parallel systems to organize both multi-core and massively parallel calculations. Numerical results are compared with known solutions of model problems, and the performance of different computing platforms is studied.

  6. Polyakova R.V., Yudin I.P.
    Mathematical modelling of the magnetic system by A. N. Tikhonov regularization method
    Computer Research and Modeling, 2011, v. 3, no. 2, pp. 165-175

    In this paper, the problem of designing a magnetic system that creates a magnetic field with the required characteristics in a given area is solved. Based on an analysis of the mathematical model of the magnetic system, a rather general approach is proposed for solving the inverse problem, which is described by the Fredholm equation H(z) = ∫_{S_I} J(s) G(z, s) ds, z ∈ S_H, s ∈ S_I. It is necessary to determine the current density distribution function J(s) and the winding geometry that create the required magnetic field H(z). The paper proposes a method of solving this problem by means of regularized iterative processes. For a concrete magnetic system, a numerical study is performed of the influence of different factors on the character of the magnetic field being designed.
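
    A minimal numerical sketch of Tikhonov-regularized inversion of such a Fredholm equation: after discretization, H = GJ, and the regularized solution is J_α = (GᵀG + αI)⁻¹GᵀH. The kernel, grids, and α below are synthetic placeholders, not the magnet-system data.

```python
# Tikhonov regularization for a discretized Fredholm equation of the first
# kind; the kernel G(z, s), the "true" density, and alpha are assumptions.
import numpy as np

z = np.linspace(0.0, 1.0, 60)             # observation points z in S_H
s = np.linspace(0.0, 1.0, 80)             # source points s in S_I
ds = s[1] - s[0]
G = ds / (1.0 + 25.0 * (z[:, None] - s[None, :])**2)   # placeholder smooth kernel

J_true = np.exp(-((s - 0.5) / 0.15)**2)   # synthetic current density
H = G @ J_true + 1e-4 * np.random.default_rng(1).standard_normal(z.size)

alpha = 1e-6                              # regularization parameter
J_alpha = np.linalg.solve(G.T @ G + alpha * np.eye(s.size), G.T @ H)
```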

  7. Vrazhnov D.A., Shapovalov A.V., Nikolaev V.V.
    On quality of object tracking algorithms
    Computer Research and Modeling, 2012, v. 4, no. 2, pp. 303-313

    Object movement in a video is classified as regular (movement along a continuous trajectory) or non-regular (the trajectory breaks due to occlusion by other objects, abrupt jumps of the object, and so on). In the case of regular object movement, the tracker is considered as a dynamical system, which makes it possible to use conditions for the existence, uniqueness, and stability of the dynamical system's solution. This condition is used as a correctness criterion for the tracking process. A quantitative criterion for assessing correct mean-shift tracking, based on the Lipschitz condition, is also suggested. The results are generalized to an arbitrary tracker.
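
    A minimal sketch of the dynamical-system view of tracking, with the histogram-based mean-shift of the paper replaced by a plain Gaussian-kernel mean-shift over sample points: the tracker iterates the map x_{k+1} = m(x_k), and an empirical contraction ratio below 1 plays the role of the Lipschitz-type stability condition. The kernel width and synthetic "object pixels" are assumptions.

```python
# Mean-shift as a discrete dynamical system x_{k+1} = m(x_k).
import numpy as np

def mean_shift_step(x, points, h=1.0):
    """One application of the mean-shift map m(x) with kernel width h."""
    w = np.exp(-np.sum((points - x) ** 2, axis=1) / (2.0 * h**2))
    return (w[:, None] * points).sum(axis=0) / w.sum()

rng = np.random.default_rng(0)
points = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2))  # object samples

x, prev_step, ratios = np.array([0.0, 0.0]), None, []
for _ in range(30):
    x_next = mean_shift_step(x, points)
    step = np.linalg.norm(x_next - x)
    if prev_step:
        # empirical contraction (Lipschitz-type) ratio along the trajectory;
        # values below 1 indicate a stable, "regular" tracking step
        ratios.append(step / prev_step)
    x, prev_step = x_next, step
print(x, max(ratios))   # x lands near the sample mode around (3, 3)
```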

  8. Shabanov A.E., Petrov M.N., Chikitkin A.V.
    A multilayer neural network for determination of particle size distribution in Dynamic Light Scattering problem
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 265-273

    Solving the Dynamic Light Scattering problem makes it possible to determine the particle size distribution (PSD) from the spectrum of the intensity of scattered light. The experiment yields an intensity curve, and the experimentally obtained intensity spectrum is compared with the theoretically expected spectrum, which is a Lorentzian line. The main task is to determine, on the basis of these data, the relative concentrations of particles of each class present in the solution. The article presents a method for constructing and using a neural network trained on synthetic data to determine the PSD in a solution in the range of 1–500 nm. The network has a fully connected layer of 60 neurons with the ReLU activation function, a layer of 45 neurons with the same activation function, a dropout layer, and two layers with 15 and 1 neurons (the network output). The article describes how the network was trained and tested on synthetic and experimental data. On the synthetic data, the root-mean-square error (RMSE) was 1.3157 nm. Experimental data were obtained for particle sizes of 200 nm and 400 nm and for a solution containing both sizes. The results of the neural network are compared with classical linear methods. A disadvantage of the classical methods is that the degree of regularization is hard to choose: too much regularization oversmooths the particle size distribution curves, while too little gives oscillating curves and unreliable results. The paper shows that the neural network gives a good prediction for particles of large size; for small sizes the prediction is worse, but the error decreases quickly as the particle size increases.
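
    A PyTorch sketch of the layer stack as described (60 → 45 → dropout → 15 → 1, ReLU activations); the input dimension and dropout rate are not stated in the abstract, so the values below are assumptions.

```python
# Sketch of the described fully connected network for the DLS problem.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 60), nn.ReLU(),    # input size 128 is an assumption
    nn.Linear(60, 45), nn.ReLU(),
    nn.Dropout(p=0.2),                # dropout rate is an assumption
    nn.Linear(45, 15), nn.ReLU(),
    nn.Linear(15, 1),                 # network output
)

spectrum = torch.randn(8, 128)        # batch of synthetic intensity spectra
prediction = model(spectrum)          # PSD-related output per sample
```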

  9. Mikheyev P.V., Gorynin G.L., Borisova L.R.
    A modified model of the effect of stress concentration near a broken fiber on the tensile strength of high-strength composites (MLLS-6)
    Computer Research and Modeling, 2020, v. 12, no. 3, pp. 559-573

    The article proposes a model for assessing the potential strength of a composite material based on modern fibers with brittle fracture.

    Materials consisting of parallel cylindrical fibers that are quasi-statically stretched in one direction are simulated. It is assumed that a sample contains at least 100 fibers, which covers the practically significant cases. It is known that the ultimate strains of the fibers are distributed within a sample, so the fibers do not fail at the same moment; the distribution of their properties is usually described by the Weibull–Gnedenko statistical distribution. To simulate the strength of the composite, a model of fiber-break accumulation is used. It is assumed that the fibers, united by the polymer matrix, fragment down to twice the ineffective length — the distance over which the stress rises from zero at the end of a broken fiber to its mid-fiber value. However, this model greatly overestimates the strength of composites with brittle fibers; carbon and glass fibers, for example, fail in this way.

    Earlier attempts were made to take into account the stress concentration near a broken fiber (the Hedgepeth model, the Ermolenko model, shear-lag analysis), but such models either required a lot of initial data or did not agree with experiment. In addition, such models idealize the packing of fibers in the composite as regular hexagonal packing.

    The proposed model combines the shear-lag approach to the stress distribution near a broken fiber with a statistical description of fiber strength based on the Weibull–Gnedenko distribution, while introducing a number of assumptions that simplify the calculation without loss of accuracy.

    It is assumed that the stress concentration on an adjacent fiber increases the probability of its failure in accordance with the Weibull distribution, and that the number of such fibers with an increased failure probability is directly related to the number of fibers already broken. All initial data can be obtained from simple experiments. It is shown that accounting for stress redistribution onto the nearest fibers only gives an accurate prediction.

    This allowed a complete calculation of the strength of the composite. Our experimental data on carbon fibers, glass fibers, and model composites based on them (CFRP, GFRP) confirm some of the conclusions of the model.
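
    A Monte Carlo sketch of the statistical core of such a model, assuming Weibull–Gnedenko fiber failure strains and a single redistribution pass in which a break overloads only the two nearest neighbours; all parameters are illustrative, not fitted to CFRP or GFRP data.

```python
# Fiber-break accumulation with Weibull-Gnedenko strengths and nearest-
# neighbour overstress; parameters and failure criterion are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_fibers = 200                        # the model assumes samples of >= 100 fibers
shape, scale = 5.0, 1.0               # Weibull shape (m) and scale parameters
strength = scale * rng.weibull(shape, n_fibers)   # per-fiber failure strains

k = 1.2                               # stress concentration on nearest neighbours
for eps in np.linspace(0.0, 1.5, 400):          # applied strain ramp
    broken = strength <= eps                    # fibers weaker than applied strain
    # single redistribution pass: neighbours of broken fibers feel k * eps
    neighbours = np.zeros(n_fibers, dtype=bool)
    neighbours[1:] |= broken[:-1]
    neighbours[:-1] |= broken[1:]
    broken |= neighbours & (strength <= k * eps)
    if broken.mean() > 0.5:                     # crude composite-failure criterion
        print(f"simulated failure strain ~ {eps:.3f}")
        break
```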

  10. Sadin D.V.
    Analysis of dissipative properties of a hybrid large-particle method for structurally complicated gas flows
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 757-772

    We study the computational properties of a parametric class of finite-volume schemes with customizable dissipative properties and splitting by physical processes into Lagrangian, Eulerian, and final stages (the hybrid large-particle method). The method has second-order approximation in space and time on smooth solutions. Regularization of the numerical solution at the Lagrangian stage is performed by nonlinear correction of the artificial viscosity; regardless of the grid resolution, the artificial viscosity tends to zero outside the zones of discontinuities and extrema in the solution. At the Eulerian and final stages, the primitive variables (density, velocity, and total energy) are first reconstructed by an additive combination of upwind and central approximations weighted by a flux limiter, and the numerical divergent fluxes are then formed from them. In this case, discrete analogs of the conservation laws are satisfied.
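
    A minimal sketch of the reconstruction step described above: face values built from an additive combination of upwind and central differences weighted by a limiter (minmod here, as a stand-in for the limiters studied in the paper).

```python
# Limited second-order reconstruction of cell-face values from cell averages.
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller slope when signs agree, zero otherwise."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def face_values(q):
    """Left states at faces i+1/2 for interior cells of the array q."""
    dq_minus = q[1:-1] - q[:-2]         # upwind (backward) differences
    dq_plus = q[2:] - q[1:-1]           # central-side (forward) differences
    slope = minmod(dq_minus, dq_plus)   # limiter blends the two approximations
    return q[1:-1] + 0.5 * slope        # second order on smooth data, no new extrema

q = np.array([1.0, 1.0, 1.0, 0.8, 0.2, 0.0, 0.0])   # cell averages near a jump
print(face_values(q))
```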

    The dissipative properties of the method are analyzed using known viscosity and flux limiters, as well as their linear combinations. The resolution of the scheme and the quality of the numerical solutions are demonstrated on two-dimensional benchmarks: gas flow over a step at Mach numbers 3, 10, and 20, the double Mach reflection of a strong shock wave, and the implosion problem. The influence of the scheme viscosity on the numerical reproduction of gas-interface instability is also studied. It is found that decreasing the dissipation level in the implosion problem leads to destruction of the symmetric solution and the formation of chaotic instability on the contact surface.

    Numerical solutions are compared with the results of other authors obtained using higher-order approximation schemes: CABARET, HLLC (Harten–Lax–van Leer Contact), CFLFh (CFLF hybrid scheme), JT (centered scheme with the limiter of Jiang and Tadmor), PPM (Piecewise Parabolic Method), WENO5 (weighted essentially non-oscillatory scheme), RKDG (Runge–Kutta Discontinuous Galerkin), and the hybrid weighted nonlinear schemes CCSSR-HW4 and CCSSR-HW6. The advantages of the hybrid large-particle method include extended applicability to hyperbolic and mixed types of problems, a good balance of dissipative and dispersive properties, and a combination of algorithmic simplicity with high resolution in problems with complex shock-wave structure, instability, and vortex formation at interfaces.
