Search results for 'regularization':
Articles found: 78
  1. Gladin E.L., Borodich E.D.
    Variance reduction for minimax problems with a small dimension of one of the variables
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 257-275

    The paper is devoted to convex-concave saddle point problems where the objective is a sum of a large number of functions. Such problems attract considerable attention of the mathematical community due to the variety of applications in machine learning, including adversarial learning, adversarial attacks and robust reinforcement learning, to name a few. The individual functions in the sum usually represent losses related to examples from a data set. Additionally, the formulation admits a possibly nonsmooth composite term. Such terms often reflect regularization in machine learning problems. We assume that the dimension of one of the variable groups is relatively small (about a hundred or less), and the other one is large. This case arises, for example, when one considers the dual formulation for a minimization problem with a moderate number of constraints. The proposed approach is based on using Vaidya’s cutting plane method to minimize with respect to the outer block of variables. This optimization algorithm is especially effective when the dimension of the problem is not very large. An inexact oracle for Vaidya’s method is calculated via an approximate solution of the inner maximization problem, which is solved by the accelerated variance reduced algorithm Katyusha. Thus, we leverage the structure of the problem to achieve fast convergence. Separate complexity bounds for gradients of different components with respect to different variables are obtained in the study. The proposed approach imposes very mild assumptions on the objective. In particular, neither strong convexity nor smoothness is required with respect to the low-dimensional variable group. The number of steps of the proposed algorithm as well as the arithmetic complexity of each step explicitly depend on the dimensionality of the outer variable, hence the assumption that it is relatively small.
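    As a hedged illustration of the two-level structure described in this abstract, the sketch below builds an inexact first-order oracle for the outer function g(y) = max_x f(x, y) by approximately solving the inner maximization. Plain gradient ascent stands in for Katyusha, a toy bilinear f replaces the finite-sum objective, and the step size and iteration count are illustrative; this is a sketch of the idea, not the paper's algorithm.

        import numpy as np

        def inexact_oracle(y, grad_x, grad_y, x0, inner_steps=500, lr=0.1):
            """Approximate the inner max over x; return an inexact gradient in y."""
            x = x0.copy()
            for _ in range(inner_steps):      # inner solver (Katyusha in the paper)
                x += lr * grad_x(x, y)        # ascent over the high-dimensional block
            return grad_y(x, y), x            # fed to the outer (cutting-plane) method

        # toy bilinear saddle: f(x, y) = y^T A x - 0.5 ||x||^2, concave in x
        A = np.array([[1.0, 2.0], [0.0, 1.0]])
        gx = lambda x, y: A.T @ y - x
        gy = lambda x, y: A @ x
        g_grad, x_star = inexact_oracle(np.array([1.0, -1.0]), gx, gy, np.zeros(2))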

  2. Geller O.V., Vasilev M.O., Kholodov Y.A.
    Building a high-performance computing system for simulation of gas dynamics
    Computer Research and Modeling, 2010, v. 2, no. 3, pp. 309-317

    The aim of this research is to develop a software system for solving gas dynamics problems in multiply connected integration domains of regular shape on high-performance computing systems. Various parallel computing technologies are compared. The program complex is implemented using multithreaded parallel systems to organize both multi-core and massively parallel computations. Numerical results are compared with known solutions of model problems, and the performance of different computing platforms is investigated.

  3. Polyakova R.V., Yudin I.P.
    Mathematical modelling of the magnetic system by A. N. Tikhonov regularization method
    Computer Research and Modeling, 2011, v. 3, no. 2, pp. 165-175

    In this paper the problem of designing a magnetic system that creates a magnetic field with the required characteristics in a given area is solved. Based on an analysis of the mathematical model of the magnetic system, a rather general approach is proposed to solving the inverse problem, which is described by the Fredholm equation $H(z) = \int_{S_I} J(s)\,G(z, s)\,ds$, $z \in S_H$, $s \in S_I$. It is necessary to determine the current density distribution function $J(s)$ and the winding geometry that create the required magnetic field $H(z)$. A method of solving this problem by means of regularized iterative processes is proposed. For a concrete magnetic system, a numerical study of the influence of different factors on the character of the designed magnetic field is performed.
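    As a hedged illustration of the regularization technique named in the title, the sketch below discretizes a first-kind Fredholm equation H = GJ on uniform grids and solves it by Tikhonov regularization via the normal equations. The Gaussian kernel, grids and noise level are illustrative stand-ins, not the magnet geometry of the paper.

        import numpy as np

        def tikhonov_solve(G, H, alpha):
            """Minimize ||G J - H||^2 + alpha ||J||^2 (normal equations)."""
            n = G.shape[1]
            return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ H)

        # discretize an assumed smooth kernel G(z, s) with quadrature weight ds
        z = np.linspace(0.0, 1.0, 80)
        s = np.linspace(0.0, 1.0, 100)
        ds = s[1] - s[0]
        G = np.exp(-((z[:, None] - s[None, :]) ** 2) / 0.02) * ds
        J_true = np.sin(np.pi * s)                         # "true" current density
        H = G @ J_true + 1e-4 * np.random.randn(z.size)    # noisy field data
        J_alpha = tikhonov_solve(G, H, alpha=1e-6)         # regularized solution

    Choosing alpha trades data fit against stability; in practice it is typically picked by the discrepancy principle or an L-curve.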

  4. Vrazhnov D.A., Shapovalov A.V., Nikolaev V.V.
    On quality of object tracking algorithms
    Computer Research and Modeling, 2012, v. 4, no. 2, pp. 303-313

    Object movement in a video is classified as regular (the object moves along a continuous trajectory) or non-regular (the trajectory breaks due to occlusion by other objects, object jumps, and so on). In the case of regular object movement, the tracker is considered a dynamical system, which makes it possible to use the conditions of existence, uniqueness, and stability of the dynamical system's solution. This condition is used as the correctness criterion of the tracking process. A quantitative criterion for assessing correct mean-shift tracking based on the Lipschitz condition is also suggested. The results are generalized to an arbitrary tracker.
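    A hedged sketch of how such a regularity check might look in practice: frame-to-frame displacements of the tracked center act as discrete Lipschitz quotients, and a jump above a threshold flags non-regular motion (an occlusion or a jump). The threshold and synthetic trajectory are assumptions for illustration, not the paper's criterion.

        import numpy as np

        def regular_motion_mask(centers, threshold):
            """centers: (T, 2) tracked positions; True where motion stays regular."""
            steps = np.linalg.norm(np.diff(centers, axis=0), axis=1)  # |c_{t+1} - c_t|
            return steps <= threshold

        track = np.cumsum(np.random.randn(100, 2), axis=0)  # synthetic smooth track
        track[50:] += 25.0                                   # simulated occlusion jump
        mask = regular_motion_mask(track, threshold=5.0)
        print("non-regular transitions at frames:", np.where(~mask)[0])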

  5. Shabanov A.E., Petrov M.N., Chikitkin A.V.
    A multilayer neural network for determination of particle size distribution in Dynamic Light Scattering problem
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 265-273

    Solution of the Dynamic Light Scattering problem makes it possible to determine the particle size distribution (PSD) from the spectrum of the intensity of scattered light. As a result of the experiment, an intensity curve is obtained. The experimentally obtained spectrum of intensity is compared with the theoretically expected spectrum, which is the Lorentzian line. The main task is to determine, on the basis of these data, the relative concentrations of particles of each class present in the solution. The article presents a method for constructing and using a neural network trained on synthetic data to determine the PSD in a solution in the range of 1–500 nm. The neural network has a fully connected layer of 60 neurons with the ReLU activation function, a layer of 45 neurons with the same activation function, a dropout layer, and two layers with 15 and 1 neurons (the network output). The article describes how the network was trained and tested on synthetic and experimental data. On the synthetic data, the root-mean-square error (RMSE) was 1.3157 nm. Experimental data were obtained for particle sizes of 200 nm, 400 nm and a solution containing representatives of both sizes. The results of the neural network are compared with those of the classical linear methods. The disadvantage of the classical methods is that it is difficult to choose the degree of regularization: too much regularization over-smooths the particle size distribution curves, while too little gives oscillating curves and low reliability of the results. The paper shows that the neural network gives a good prediction for particles of large size. For small sizes the prediction is worse, but the error decreases quickly as the particle size increases.
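    A hedged PyTorch sketch of the layer stack as described in the abstract (60 -> 45 -> dropout -> 15 -> 1, with ReLU activations). The input width (number of spectral samples) and the dropout rate are not given in the abstract and are assumed here for illustration.

        import torch
        import torch.nn as nn

        n_spectrum = 128  # assumed number of samples in the intensity spectrum

        model = nn.Sequential(
            nn.Linear(n_spectrum, 60), nn.ReLU(),
            nn.Linear(60, 45), nn.ReLU(),
            nn.Dropout(p=0.2),             # assumed dropout rate
            nn.Linear(45, 15), nn.ReLU(),
            nn.Linear(15, 1),              # network output: relative concentration
        )
        prediction = model(torch.randn(8, n_spectrum))  # batch of 8 synthetic spectra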

  6. Mikheyev P.V., Gorynin G.L., Borisova L.R.
    A modified model of the effect of stress concentration near a broken fiber on the tensile strength of high-strength composites (MLLS-6)
    Computer Research and Modeling, 2020, v. 12, no. 3, pp. 559-573

    The article proposes a model for assessing the potential strength of a composite material based on modern fibers with brittle fracture.

    Materials consisting of parallel cylindrical fibers that are quasi-statically stretched in one direction are simulated. It is assumed that the sample contains at least 100 fibers, which covers most practically significant cases. It is known that the ultimate strains of the fibers in a sample are statistically distributed, so the fibers do not all fail at the same moment. Usually the distribution of their properties is described by the Weibull–Gnedenko statistical distribution. To simulate the strength of the composite, a model of fiber-break accumulation is used. It is assumed that the fibers united by the polymer matrix are fragmented down to twice the ineffective length, that is, the distance over which the stress builds up from the end of a broken fiber to the level in its middle part. However, this model greatly overestimates the strength of composites with brittle fibers. Carbon and glass fibers, for example, fail in this way.

    Earlier attempts were made to take into account the stress concentration near a broken fiber (the Hedgepeth model, the Ermolenko model, shear-lag analysis), but such models either required a lot of initial data or did not agree with experiment. In addition, such models idealize the packing of fibers in the composite as regular hexagonal packing.

    The model combines the shear-lag approach to the stress distribution near a broken fiber with a statistical description of fiber strength based on the Weibull–Gnedenko distribution, while introducing a number of assumptions that simplify the calculation without loss of accuracy.

    It is assumed that the stress concentration on an adjacent fiber increases its probability of failure in accordance with the Weibull distribution, and the number of such fibers with an increased probability of failure is directly related to the number of fibers already broken. All initial data can be obtained from simple experiments. It is shown that accounting for load redistribution only onto the nearest fibers gives an accurate forecast.

    This allowed a complete calculation of the strength of the composite. The experimental data we obtained on carbon fibers, glass fibers and model composites based on them (CFRP, GFRP) confirm several of the model's conclusions.
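    A hedged Monte Carlo sketch of the break-accumulation idea described above: fiber strengths are sampled from a Weibull distribution, and each broken fiber raises the stress on its nearest intact neighbours by a concentration factor, increasing their failure probability. The Weibull parameters, the overload factor and the bundle-failure criterion are illustrative assumptions, not the calibrated values of the MLLS-6 model.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, s0, k = 1000, 5.0, 1.0, 1.2        # fibers; Weibull modulus/scale; overload
        strength = s0 * rng.weibull(m, size=n)   # sampled fiber strengths
        broken = np.zeros(n, dtype=bool)

        for stress in np.linspace(0.0, 1.5, 300):     # quasi-static loading ramp
            local = np.full(n, stress)
            near_break = np.roll(broken, 1) | np.roll(broken, -1)
            local[near_break & ~broken] *= k          # concentration on neighbours
            broken |= local >= strength               # new failures at this load step
            if broken.mean() > 0.5:                   # crude bundle-failure criterion
                print(f"bundle fails near applied stress {stress:.3f}")
                break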

  7. Sadin D.V.
    Analysis of dissipative properties of a hybrid large-particle method for structurally complicated gas flows
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 757-772

    We study the computational properties of a parametric class of finite-volume schemes with customizable dissipative properties and splitting by physical processes into Lagrangian, Eulerian, and final stages (the hybrid large-particle method). The method has second-order approximation in space and time on smooth solutions. Regularization of the numerical solution at the Lagrangian stage is performed by nonlinear correction of the artificial viscosity. Regardless of the grid resolution, the artificial viscosity tends to zero outside the zones of discontinuities and extrema of the solution. At the Eulerian and final stages, the primitive variables (density, velocity, and total energy) are first reconstructed by an additive combination of upwind and central approximations weighted by a flux limiter, and numerical divergent fluxes are then formed from them. In this case, discrete analogs of the conservation laws are satisfied.
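    A hedged one-dimensional sketch of the reconstruction step mentioned above: a face value is formed as an additive blend of upwind and central approximations weighted by a flux limiter. The van Leer limiter is used purely as an illustration; the paper examines several limiters and their linear combinations.

        import numpy as np

        def van_leer(r):
            """van Leer flux limiter."""
            return (r + np.abs(r)) / (1.0 + np.abs(r))

        def reconstruct_faces(u, eps=1e-12):
            """Limited face values u_{i+1/2} on a uniform 1D grid (interior cells)."""
            du_upwind = u[1:-1] - u[:-2]                 # backward difference
            du_downwind = u[2:] - u[1:-1]                # forward difference
            phi = van_leer(du_upwind / (du_downwind + eps))
            first_order = u[1:-1]                        # upwind approximation
            second_order = u[1:-1] + 0.5 * du_downwind   # central approximation
            return first_order + phi * (second_order - first_order)

        u = np.where(np.linspace(0, 1, 50) < 0.5, 1.0, 0.1)  # step profile
        faces = reconstruct_faces(u)       # limiter falls back to upwind at the jump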

    The analysis of the dissipative properties of the method using known viscosity and flux limiters, as well as their linear combinations, is performed. The resolution of the scheme and the quality of numerical solutions are demonstrated on two-dimensional benchmarks: gas flow around a step with Mach numbers 3, 10 and 20, the double Mach reflection of a strong shock wave, and the implosion problem. The influence of the scheme viscosity of the method on the numerical reproduction of gas interface instability is studied. It is found that decreasing the dissipation level in the implosion problem leads to destruction of the symmetric solution and the formation of a chaotic instability on the contact surface.

    Numerical solutions are compared with the results of other authors obtained using higher-order approximation schemes: CABARET, HLLC (Harten–Lax–van Leer Contact), CFLFh (CFLF hybrid scheme), JT (centered scheme with the limiter by Jiang and Tadmor), PPM (Piecewise Parabolic Method), WENO5 (weighted essentially non-oscillatory scheme), RKDG (Runge–Kutta Discontinuous Galerkin), and the hybrid weighted nonlinear schemes CCSSR-HW4 and CCSSR-HW6. The advantages of the hybrid large-particle method include extended capabilities for solving hyperbolic and mixed types of problems, a good ratio of dissipative and dispersive properties, and a combination of algorithmic simplicity and high resolution in problems with a complex shock-wave structure, including instability and vortex formation at interfaces.

  8. Grachev V.A., Nayshtut Yu.S.
    Variational principle for shape memory solids under variable external forces and temperatures
    Computer Research and Modeling, 2021, v. 13, no. 3, pp. 541-555

    The quasistatic deformation problem for shape memory alloys is reviewed within the phenomenological mechanics of solids, without microphysics analysis. The phenomenological approach is based on a comparison of two material deformation diagrams. The first diagram corresponds to active proportional loading, when the alloy behaves as an ideal elastoplastic material; residual strain is observed after unloading. The second diagram is relevant to the case when the deformed sample is heated to a certain temperature specific to each alloy. The initial shape is then restored: the reverse distortion matches the deformations of the first diagram, except for the sign. Because the first stage of distortion can be described by a variational principle for which the existence of generalized solutions is proved under arbitrary loading, it becomes clear how to explain the reverse distortion within a slightly modified theory of plasticity. The simply connected loading surface needs to be replaced with a doubly connected one, and the variational principle needs to be supplemented with the two laws of thermodynamics and the principle of orthogonality for thermodynamic forces and fluxes. In this case it is not difficult to prove the existence of solutions either. The successful application of the theory of plasticity at constant temperature makes it natural to seek a similar result for the more general case of variable external forces and temperatures. The paper studies the ideal elastoplastic von Mises model at linear strain rates. Taking hardening and an arbitrary loading surface into account does not cause any additional difficulties.

    An extended variational principle of the Reissner type is defined. Together with the laws of thermal plasticity, it makes it possible to prove the existence of generalized solutions for three-dimensional bodies made of shape memory materials. The main issue to resolve is the choice of a functional space for the rates and deformations of the continuum points. The space of bounded deformation, the main instrument of the mathematical theory of plasticity, serves this purpose in the paper. The proof shows that the choice of functional spaces used in the paper is not the only possible one. The study of other possible settings of the extended variational principle and the search for regularity of generalized solutions seem an interesting challenge for future research.
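    For orientation, a hedged sketch in standard notation of the classical ingredients the abstract refers to: the von Mises yield condition and the associated (orthogonality) flow rule. The annular bound below is only an assumed reading of the "doubly connected" loading surface, not the paper's exact construction.

        % Standard von Mises yield condition and associated flow rule (sketch):
        \[
          f(\sigma) = \|\operatorname{dev}\sigma\| - \sqrt{\tfrac{2}{3}}\,\sigma_Y \le 0,
          \qquad
          \dot{\varepsilon}^{\,p} = \lambda\,\frac{\partial f}{\partial \sigma},
          \quad \lambda \ge 0 .
        \]
        % Assumed reading of the doubly connected loading surface (an annulus in
        % deviatoric stress space), replacing the simply connected one:
        \[
          \sigma_1 \le \|\operatorname{dev}\sigma\| \le \sigma_2 .
        \]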

  9. Litvinov V.N., Chistyakov A.E., Nikitina A.V., Atayan A.M., Kuznetsova I.Y.
    Mathematical modeling of hydrodynamics problems of the Azov Sea on a multiprocessor computer system
    Computer Research and Modeling, 2024, v. 16, no. 3, pp. 647-672

    The article is devoted to modeling shallow water hydrodynamic processes using the example of the Sea of Azov. The article presents a mathematical model of the hydrodynamics of a shallow water body, which allows one to calculate three-dimensional fields of the velocity vector of the aquatic environment. Application of regularizers according to B. N. Chetverushkin in the continuity equation led to a change in the method of calculating the pressure field, which is based on solving the wave equation. A discrete finite-difference scheme has been constructed for calculating pressure in a domain whose linear vertical dimensions are significantly smaller than those in the horizontal coordinate directions, which is typical for the geometry of shallow water bodies. The method and algorithm for solving grid equations with a tridiagonal preconditioner are described. The proposed method is used to solve the grid equations that arise when calculating pressure for the three-dimensional problem of the hydrodynamics of the Sea of Azov. It is shown that the proposed method converges faster than the modified alternating triangular method. A parallel implementation of the proposed method for solving grid equations is presented, and theoretical and practical estimates of the speedup of the algorithm are obtained, taking into account the latency of the computing system. The results of computational experiments for solving problems of the hydrodynamics of the Sea of Azov using hybrid MPI + OpenMP technology are presented. The developed models and algorithms were used to reconstruct the environmental disaster that occurred in the Sea of Azov in 2001 and to solve the problem of the movement of the aquatic environment in estuary areas. Numerical experiments were carried out on the K-60 hybrid computing cluster of the Keldysh Institute of Applied Mathematics of the Russian Academy of Sciences.
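    A hedged sketch of the solver pattern mentioned above: a preconditioned Richardson iteration whose preconditioner solve is a single tridiagonal sweep (Thomas algorithm). The test matrix is a small perturbed tridiagonal operator, not the paper's pressure equation, and the simple iteration stands in for the authors' method.

        import numpy as np

        def thomas_solve(lower, diag, upper, rhs):
            """Solve a tridiagonal system in O(n) by forward/back substitution."""
            n = diag.size
            b = diag.astype(float).copy()
            c = upper.astype(float).copy()
            d = rhs.astype(float).copy()
            for i in range(1, n):
                w = lower[i - 1] / b[i - 1]
                b[i] -= w * c[i - 1]
                d[i] -= w * d[i - 1]
            x = np.empty(n)
            x[-1] = d[-1] / b[-1]
            for i in range(n - 2, -1, -1):
                x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
            return x

        n = 64
        sub = np.full(n - 1, -1.0)
        dia = np.full(n, 4.0)
        sup = np.full(n - 1, -1.0)
        A = (np.diag(dia) + np.diag(sub, -1) + np.diag(sup, 1)
             + 0.1 * (np.eye(n, k=2) + np.eye(n, k=-2)))  # weak extra coupling
        b = np.ones(n)
        x = np.zeros(n)
        for _ in range(50):                    # preconditioned Richardson iteration
            x += thomas_solve(sub, dia, sup, b - A @ x)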

  10. Doludenko A.N., Kulikov Y.M., Saveliev A.S.
    Chaotic flow evolution arising in a body force field
    Computer Research and Modeling, 2024, v. 16, no. 4, pp. 883-912

    This article presents the results of an analytical and computer study of the chaotic evolution of a regular velocity field generated by large-scale harmonic forcing. The authors obtained an analytical solution for the flow stream function and its derivative quantities (velocity, vorticity, kinetic energy, enstrophy and palinstrophy). Numerical modeling of the flow evolution was carried out using the OpenFOAM software package based on an incompressible model, as well as two in-house implementations of the CABARET and MacCormack methods employing a nearly incompressible formulation. Calculations were carried out on a sequence of nested meshes with $64^2$, $128^2$, $256^2$, $512^2$, $1024^2$ cells for two characteristic (asymptotic) Reynolds numbers corresponding to laminar and turbulent evolution of the flow, respectively. Simulations show that blow-up of the analytical solution takes place in both cases. The energy characteristics of the flow are discussed relying upon the energy curves as well as the dissipation rates. For the fine mesh, the numerical dissipation rate turns out to be several orders of magnitude less than its hydrodynamic (viscous) counterpart. Destruction of the regular flow structure is observed for all of the numerical methods, including at the late stages of laminar evolution, when the numerically obtained distributions are close to the analytical ones. It can be assumed that the prerequisite for the development of instability is the error accumulated during the calculation. This error leads to unevenness in the distribution of vorticity and, as a consequence, to variation of the vortex intensity, finally leading to chaotization of the flow. To study the processes of vorticity production, two integral vorticity-based quantities are used: integral enstrophy ($\zeta$) and palinstrophy ($P$). The formulation of the problem with periodic boundary conditions allows a simple connection between these quantities to be established. In addition, $\zeta$ can act as a measure of the eddy resolution of the numerical method, while palinstrophy determines the degree of production of small-scale vorticity.
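    A hedged sketch of the two diagnostics named above for a vorticity field on a doubly periodic grid, using spectral derivatives: integral enstrophy $\zeta = \frac{1}{2}\langle\omega^2\rangle$ and palinstrophy $P = \frac{1}{2}\langle|\nabla\omega|^2\rangle$. The synthetic field is illustrative; for an unforced 2D periodic flow, the known balance $d\zeta/dt = -2\nu P$ is a connection of the kind the abstract alludes to, though not necessarily the one the authors derive.

        import numpy as np

        def enstrophy_palinstrophy(omega, L=2 * np.pi):
            """zeta = 0.5 <omega^2>, P = 0.5 <|grad omega|^2> on a periodic grid."""
            n = omega.shape[0]
            k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)        # wavenumbers
            w_hat = np.fft.fft2(omega)
            dwdx = np.real(np.fft.ifft2(1j * k[None, :] * w_hat))
            dwdy = np.real(np.fft.ifft2(1j * k[:, None] * w_hat))
            zeta = 0.5 * np.mean(omega ** 2)
            P = 0.5 * np.mean(dwdx ** 2 + dwdy ** 2)
            return zeta, P

        x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
        X, Y = np.meshgrid(x, x)
        zeta, P = enstrophy_palinstrophy(np.sin(X) * np.cos(Y))  # P = 2 * zeta here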
