All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Reduction of decision rule of multivariate interpolation and approximation method in the problem of data classification
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 475-484
This article explores a machine learning method based on the theory of random functions. One of the main drawbacks of this method is that the decision rule of the model grows more complex as the number of training examples increases: the decision rule is the most probable realization of a random function, represented as a polynomial whose number of terms equals the number of training examples. This article presents a fast way to reduce the number of training examples and, accordingly, the complexity of the decision rule. The reduction is achieved by finding and removing weak elements, which have little effect on the final form of the decision function, and noise elements of the sample. For each element $(x_i, y_i)$ of the sample, the concept of its value is introduced, expressed as the deviation of the decision function of the model at the point $x_i$, built without the $i$-th element, from the true value $y_i$. We also show how weak elements can be used indirectly during model training without increasing the number of terms in the decision function. In the experimental part of the article, we show how the amount of training data affects the generalization ability of the method in the classification task.
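The per-element "value" described above can be sketched as a leave-one-out deviation. The snippet below uses kernel ridge regression as a hypothetical stand-in for the article's random-function decision rule (the kernel, `lam`, `gamma`, and the pruning rule are illustrative assumptions, not the paper's method):

```python
import numpy as np

def loo_values(X, y, lam=1e-3, gamma=1.0):
    """'Value' of each training element: deviation of the model fitted
    WITHOUT element i from the true label y_i at x_i (kernel-ridge
    stand-in for the article's decision function)."""
    n = len(y)
    # RBF Gram matrix over all training points
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    vals = np.empty(n)
    for i in range(n):
        idx = np.delete(np.arange(n), i)
        Ki = K[np.ix_(idx, idx)]
        alpha = np.linalg.solve(Ki + lam * np.eye(n - 1), y[idx])
        f_i = K[i, idx] @ alpha        # prediction at x_i without element i
        vals[i] = abs(f_i - y[i])      # small value = weak element
    return vals

def prune(X, y, keep_frac=0.5, **kw):
    """Drop the weakest (lowest-value) elements, shrinking the decision rule."""
    v = loo_values(X, y, **kw)
    keep = np.argsort(v)[-int(np.ceil(keep_frac * len(y))):]
    return X[keep], y[keep]
```

The surviving subset then defines a decision function with proportionally fewer terms.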
-
Estimation of natural frequencies of torsional vibrations of a composite nonlinearly viscoelastic shaft
Computer Research and Modeling, 2018, v. 10, no. 4, pp. 421-430
The article presents a method for linearizing the effective function of instantaneous material deformation in order to generalize the torsional vibration equation to nonlinearly deformable, rheologically active shafts. Layered and structurally heterogeneous, on-average isotropic shafts made of nonlinearly viscoelastic components are considered. The technique consists in determining an approximate shear modulus by minimizing the root-mean-square deviation in the approximation of the effective diagram of instantaneous deformation.
The method allows analytical estimation of the natural frequencies of layered and structurally heterogeneous nonlinearly viscoelastic shafts. This makes it possible to significantly reduce the computational effort of vibration analysis and to track how the natural frequencies change with the geometric, physico-mechanical, and structural parameters of the shafts, which is especially important at the initial stages of modeling and design. In addition, the paper shows that only a pronounced nonlinearity of the effective state equation affects the natural frequencies; in some cases the nonlinearity can be neglected when determining them.
As the equations of state of the composite material components, the article adopts equations of nonlinear heredity with instantaneous deformation functions in the form of Prandtl's bilinear diagrams. To homogenize the state equations of layered shafts, the Voigt hypothesis of homogeneous deformations and the Reuss hypothesis of homogeneous stresses in the volume of the composite body are applied. Under these assumptions, effective secant and tangential shear moduli, proportionality limits, and creep and relaxation kernels of longitudinally, axially, and transversely layered shafts are obtained. The same effective characteristics are also obtained for a structurally heterogeneous, on-average isotropic shaft using the homogenization method previously proposed by the authors, based on determining the deformation parameters of the material by the rule of mixtures for the Voigt and Reuss state equations.
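The linearization idea can be illustrated on a single bilinear (Prandtl) diagram: the approximate modulus is the slope of the straight line through the origin that minimizes the discrete least-squares deviation from the diagram. This is a sketch of the principle only; the paper applies it to effective (homogenized) diagrams:

```python
import numpy as np

def prandtl_stress(gamma, G, G1, gamma_p):
    """Bilinear (Prandtl) diagram of instantaneous shear deformation:
    slope G up to the proportionality-limit strain gamma_p, slope G1 beyond."""
    return np.where(gamma <= gamma_p, G * gamma,
                    G * gamma_p + G1 * (gamma - gamma_p))

def linearized_modulus(G, G1, gamma_p, gamma_max, n=2001):
    """Approximate shear modulus G* minimizing the discrete RMS deviation
    of the linear law G*·gamma from the bilinear diagram on [0, gamma_max].
    Closed-form least squares: G* = sum(tau_k·gamma_k) / sum(gamma_k^2)."""
    g = np.linspace(0.0, gamma_max, n)
    tau = prandtl_stress(g, G, G1, gamma_p)
    return float(np.dot(tau, g) / np.dot(g, g))
```

For a diagram that is actually linear, the method returns the original modulus; for a softening diagram it returns an intermediate effective value, which then enters the natural-frequency formulas in place of the exact modulus.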
-
Development of anisotropic nonlinear noise-reduction algorithm for computed tomography data with context dynamic threshold
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 233-248
The article deals with the development of a noise-reduction algorithm based on anisotropic nonlinear filtering of computed tomography (CT) data. Analysis of the domestic and foreign literature has shown that the most effective noise-reduction algorithms for CT data use complex methods of data analysis and processing, such as bilateral, adaptive, and three-dimensional filtering. However, combinations of such techniques are rarely used in practice because of the long processing time per slice. We therefore developed an efficient and fast noise-reduction algorithm based on a simplified bilateral filtering method with three-dimensional data accumulation. The algorithm was implemented in C++11 in Microsoft Visual Studio 2015. Its main distinguishing feature is the use of an improved mathematical model of CT noise, developed earlier by our team, based on Poisson and Gaussian distributions of the logarithmic value; this allows a more accurate determination of the noise level and thus of the data-processing threshold. The algorithm yields processed CT data with a lower noise level. Visual evaluation showed increased information content of the processed data compared to the original, clearer mapping of homogeneous regions, and a significant reduction of noise in the processed areas. Numerical evaluation showed a more than sixfold decrease in the standard deviation (SD) in the processed areas, while high values of the coefficient of determination confirmed that the data were not distorted and changed only through noise removal. The newly developed context dynamic threshold made it possible to decrease the SD in every area of the data.
The main advantages of the developed threshold are its simplicity and speed, achieved by a preliminary estimation of the data array that yields threshold values put in correspondence with each pixel of the CT image. Its operating principle is based on threshold criteria, which fit well both into the developed noise-reduction algorithm based on anisotropic nonlinear filtering and into other noise-reduction algorithms. The algorithm runs successfully as part of the MultiVox workstation and is being prepared for deployment in the unified radiological network of the city of Moscow.
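A minimal single-slice bilateral filter conveys the core of the "simplified bilateral filtration" idea: each pixel is replaced by a weighted mean whose weights combine spatial closeness and intensity similarity. This sketch omits the paper's three-dimensional (multi-slice) accumulation and its noise-model-driven context threshold; `sigma_s` and `sigma_r` are illustrative parameters:

```python
import numpy as np

def bilateral_slice(img, radius=2, sigma_s=1.5, sigma_r=30.0):
    """Minimal bilateral filter for one 2D slice: weights are a product of
    a fixed spatial Gaussian and an intensity-range Gaussian, so edges
    (large intensity jumps) are preserved while flat regions are smoothed."""
    H, W = img.shape
    pad = np.pad(img.astype(float), radius, mode='edge')
    out = np.zeros((H, W), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    for i in range(H):
        for j in range(W):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-((win - img[i, j])**2) / (2 * sigma_r**2))
            w = spatial * rangew
            out[i, j] = (w * win).sum() / w.sum()
    return out
```

On a homogeneous region the filter leaves the signal untouched and lowers the SD of additive noise, which is the behavior the article's numerical evaluation measures.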
-
Difference scheme for solving problems of hydrodynamics for large grid Peclet numbers
Computer Research and Modeling, 2019, v. 11, no. 5, pp. 833-848
The paper discusses the development and application of a method that accounts for the fullness of rectangular cells with material substance, in particular a liquid, to increase the smoothness and accuracy of finite-difference solutions of hydrodynamic problems with a complex boundary surface. Two problems of computational hydrodynamics are considered to study the capabilities of the proposed difference schemes: the spatially two-dimensional flow of a viscous fluid between two coaxial semi-cylinders, and the transport of substances between coaxial semi-cylinders. The diffusion and convection operators were discretized on the basis of the integro-interpolation method, both with and without account of cell fullness. For the diffusion – convection problem at large grid Peclet numbers, it is proposed to use a difference scheme that takes the cell-fullness function into account and is built as a linear combination of the Upwind and Standard Leapfrog difference schemes, with weight coefficients obtained by minimizing the approximation error at small Courant numbers. The analytical solution describing the Couette – Taylor flow is used as a reference to estimate the accuracy of the numerical solution. With direct use of rectangular grids (stepwise approximation of the boundaries), the relative error of the calculations reaches 70%; under the same conditions, the proposed method reduces the error to 6%. It is shown that refining a rectangular grid by a factor of 2–8 in each spatial direction does not yield the accuracy achieved by numerical solutions that take cell fullness into account.
The proposed difference schemes based on a linear combination of the Upwind and Standard Leapfrog schemes, with weighting factors of 2/3 and 1/3, respectively, obtained by minimizing the approximation error, have lower grid viscosity for the diffusion – convection problem and, as a consequence, describe the behavior of the solution more accurately at large grid Peclet numbers.
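The 2/3 Upwind + 1/3 Standard Leapfrog combination can be sketched for the 1D linear advection equation $u_t + a u_x = 0$ with periodic boundaries (cell fullness and the full diffusion – convection operator are omitted; the bootstrap step and grid parameters are illustrative):

```python
import numpy as np

def step_combined(u_prev, u_cur, c):
    """One step of the 2/3 Upwind + 1/3 Standard Leapfrog combination for
    u_t + a u_x = 0 (a > 0, Courant number c = a*dt/dx), periodic BC."""
    upwind = u_cur - c * (u_cur - np.roll(u_cur, 1))
    leapfrog = u_prev - c * (np.roll(u_cur, -1) - np.roll(u_cur, 1))
    return (2.0 / 3.0) * upwind + (1.0 / 3.0) * leapfrog

def advect(u0, c, nsteps):
    """March nsteps, bootstrapping the two-level leapfrog with one upwind step."""
    u_prev = u0.copy()
    u_cur = u0 - c * (u0 - np.roll(u0, 1))   # first step: pure upwind
    for _ in range(nsteps - 1):
        u_prev, u_cur = u_cur, step_combined(u_prev, u_cur, c)
    return u_cur
```

With periodic boundaries both constituent schemes conserve the discrete total of $u$, so the combination does as well; the leapfrog contribution reduces the numerical (grid) viscosity relative to pure upwind.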
-
Mirror descent for constrained optimization problems with large subgradient values of functional constraints
Computer Research and Modeling, 2020, v. 12, no. 2, pp. 301-317
The paper is devoted to minimization of a non-smooth functional $f$ under a non-positive non-smooth Lipschitz-continuous functional constraint $g$; the problem statement is considered separately for quasi-convex and strongly (quasi-)convex functionals. We propose new step-size strategies and adaptive stopping rules in Mirror Descent for the considered class of problems. The methods are shown to be applicable to objective functionals of various levels of smoothness. By applying a special restart technique to the considered version of Mirror Descent, an optimal method is obtained for optimization problems with strongly convex objective functionals. Convergence-rate estimates for the considered methods are derived depending on the level of smoothness of the objective functional. These estimates indicate the optimality of the considered methods from the point of view of the theory of lower oracle bounds; in particular, optimality is proved for Hölder-continuous quasi-convex (sub)differentiable objective functionals. The case of a quasi-convex objective functional and functional constraint is also considered. The paper presents numerical experiments demonstrating the advantages of the proposed methods.
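The switching step rule behind such methods can be sketched in the Euclidean setup (where Mirror Descent reduces to the subgradient method): take a "productive" step along a subgradient of $f$ when the constraint is nearly satisfied, otherwise a "non-productive" step along a subgradient of $g$, with adaptive steps $\varepsilon/\|s\|^2$. This is a generic sketch of the classical switching scheme, not the paper's specific restart technique or stopping rules:

```python
import numpy as np

def mirror_descent_switching(f, g, grad_f, grad_g, x0, eps, max_iter=20000):
    """Switching-step subgradient (Euclidean Mirror Descent) sketch:
    productive step along a subgradient of f when g(x) <= eps, otherwise a
    non-productive step along a subgradient of g; adaptive steps eps/||s||^2.
    Returns the average of the productive points."""
    x = np.asarray(x0, float)
    productive = []
    for _ in range(max_iter):
        if g(x) <= eps:
            s = grad_f(x)
            productive.append(x.copy())
        else:
            s = grad_g(x)
        ns = np.dot(s, s)
        if ns == 0:
            break                     # zero subgradient: stationary point
        x = x - (eps / ns) * s
    return np.mean(productive, axis=0)
```

Normalizing the step by $\|s\|^2$ is what keeps the scheme usable when the subgradients of the constraint are large, the regime emphasized in the title.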
-
A difference method for solving the convection–diffusion equation with a nonclassical boundary condition in a multidimensional domain
Computer Research and Modeling, 2022, v. 14, no. 3, pp. 559-579
The paper studies a multidimensional convection – diffusion equation with variable coefficients and a nonclassical boundary condition. Two cases are considered: in the first, the boundary condition contains the integral of the unknown function with respect to the integration variable $x_\alpha^{}$; in the second, it contains the integral of the unknown function with respect to the integration variable $\tau$, representing a memory effect. Similar problems arise in studying the transport of impurities along a riverbed. For the approximate solution of the posed problem, a locally one-dimensional difference scheme of A. A. Samarskii with order of approximation $O(h^2+\tau)$ is constructed. Since the equation contains the first derivative of the unknown function with respect to the spatial variable $x_\alpha^{}$, the well-known method proposed by A. A. Samarskii for constructing a monotone scheme of second-order accuracy in $h_\alpha^{}$ for a general parabolic equation containing one-sided derivatives is applied, taking the sign of $r_\alpha^{}(x,t)$ into account. To raise the approximation of the boundary conditions of the third kind to second-order accuracy in $h_\alpha^{}$, the equation itself is used under the assumption that it also holds on the boundary. The uniqueness and stability of the solution are studied by the method of energy inequalities. A priori estimates for the solution of the difference problem are obtained in the $L_2^{}$-norm; they imply the uniqueness of the solution, its continuous and uniform dependence on the input data, and the convergence of the solution of the locally one-dimensional difference scheme to the solution of the original differential problem in the $L_2^{}$-norm at a rate equal to the order of approximation of the difference scheme. A numerical solution algorithm is constructed for the two-dimensional problem.
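The skeleton of a locally one-dimensional (splitting) scheme is an implicit tridiagonal sweep along each spatial direction in turn. The sketch below applies it to the pure heat equation $u_t = u_{xx} + u_{yy}$ with homogeneous Dirichlet boundaries; the paper's convection terms, variable coefficients, and nonclassical boundary conditions are omitted:

```python
import numpy as np

def thomas(a, b, c, d):
    """Tridiagonal sweep ('progonka'): a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp = np.zeros(n); dp = np.zeros(n)
    cp[0] = c[0] / b[0]; dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def lod_heat_step(u, dt, h):
    """One time step of the locally one-dimensional (splitting) scheme for
    u_t = u_xx + u_yy with u = 0 on the boundary: an implicit tridiagonal
    solve along x, then along y."""
    r = dt / h**2
    for axis in (0, 1):
        v = u if axis == 0 else u.T
        out = v.copy()
        n = v.shape[0]
        for j in range(1, v.shape[1] - 1):
            a = np.full(n, -r); b = np.full(n, 1 + 2 * r); c = np.full(n, -r)
            b[0] = b[-1] = 1.0; c[0] = a[-1] = 0.0   # Dirichlet rows
            d = v[:, j].copy(); d[0] = d[-1] = 0.0
            out[:, j] = thomas(a, b, c, d)
        u = out if axis == 0 else out.T
    return u
```

Each substep costs $O(N)$ per grid line, which is what makes the locally one-dimensional approach attractive in several dimensions.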
-
Parametric identification of dynamic systems based on external interval estimates of phase variables
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 299-314
Inverse problems, which include in particular the problem of parametric identification, play an important role in building mathematical models of dynamic systems. Unlike classical models that operate with point values, interval models give upper and lower bounds on the quantities under study. The paper considers an interpolation approach to solving interval problems of parametric identification of dynamic systems for the case when the experimental data are represented by external interval estimates. The goal of the proposed approach is to find an interval estimate of the model parameters for which the external interval estimate of the solution of the direct modeling problem either contains the experimental data or minimizes the deviation from them. The approach is based on the adaptive interpolation algorithm for modeling dynamic systems with interval uncertainties, which makes it possible to obtain the dependence of the phase variables on the system parameters explicitly. The problem of minimizing the distance between the experimental data and the model solution in the space of the interval bounds of the model parameters is formulated, and an expression for the gradient of the objective function is obtained. The effectiveness of the proposed approach is demonstrated on a representative set of problems.
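The idea of fitting interval parameter bounds by gradient descent on the mismatch between model output bounds and experimental bounds can be illustrated on a toy model $y = e^{-pt}$, which is monotone in $p$, so its output interval has the closed form $[e^{-p_{hi}t},\, e^{-p_{lo}t}]$. This is a hypothetical illustration: the paper handles general ODE systems through the adaptive interpolation algorithm, not this monotone shortcut; the model, learning rate, and iteration count are assumptions:

```python
import numpy as np

def fit_interval_parameter(t, y_lo, y_hi, p0=(0.1, 1.0), lr=0.05, iters=20000):
    """Fit the interval parameter [p_lo, p_hi] of the toy model y = exp(-p t)
    so that its output bounds match experimental bounds [y_lo, y_hi] in
    least squares. Because y decreases in p, the upper output bound comes
    from p_lo and the lower one from p_hi."""
    p_lo, p_hi = p0
    for _ in range(iters):
        m_hi = np.exp(-p_lo * t)      # upper model bound
        m_lo = np.exp(-p_hi * t)      # lower model bound
        # gradient of 0.5 * sum((m_hi - y_hi)^2 + (m_lo - y_lo)^2)
        g_lo = np.sum((m_hi - y_hi) * (-t) * m_hi)
        g_hi = np.sum((m_lo - y_lo) * (-t) * m_lo)
        p_lo -= lr * g_lo
        p_hi -= lr * g_hi
        p_lo, p_hi = min(p_lo, p_hi), max(p_lo, p_hi)  # keep a valid interval
    return p_lo, p_hi
```

The re-ordering step mirrors the fact that the optimization runs in the space of interval bounds, where the lower bound must not overtake the upper one.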
-
Model of mantle convection in a zone of a complete subduction cycle
Computer Research and Modeling, 2024, v. 16, no. 6, pp. 1385-1398
A 2D numerical model of the subsidence of a cold oceanic plate into the Earth's upper mantle has been developed, in which the stage of initial plate subsidence is preceded by the establishment of a regime of thermogravitational convection of the mantle substance. The model approximates the mantle as a two-dimensional incompressible Newtonian quasi-liquid in a Cartesian coordinate system; because of the high viscosity of the medium, the equations of mantle convection are taken in the Stokes approximation. It is assumed that seawater that has leaked in enters the upper horizons of the mantle together with the plate. With depth, increasing pressure and temperature lead to losses of the plate's light fractions and fluids, losses of water and of the gases of its water-containing minerals, restructuring of their crystal lattice and, as a consequence, phase transformations. These losses increase the plate density and produce an uneven stress distribution along the plate (its initial sections are denser), which subsequently, together with the action of mantle currents on the plate, causes its fragmentation. The state of mantle convection is considered up to the point when the plate and its individual fragments have descended to the bottom of the upper mantle. Computational schemes for solving the model equations have been developed: mantle convection is computed in the Stokes approximation in terms of vorticity and the stream function, while SPH is used to compute the state and subsidence of the plate. A number of computational experiments have been performed, showing that fragmentation of the plate occurs due to the effect of mantle convection on the plate and the development of inhomogeneous stress fields along it. From the model equations, the duration of the final stage of subduction is estimated, i.e., the time for the entire oceanic plate to reach the bottom of the upper mantle. In geodynamics, this process is delimited by the collision of plates that immediately follows subduction and is usually considered the final stage of the Wilson cycle (the cycle of development of folded belts).
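The elliptic half of the vorticity – stream-function formulation mentioned above amounts to solving a Poisson problem $\nabla^2\psi = -\omega$ and recovering the incompressible velocity from $\psi$. The sketch below uses a plain Jacobi iteration with $\psi = 0$ on the boundary; it is a generic illustration of the formulation, not the paper's actual solver:

```python
import numpy as np

def stream_function(omega, h, iters=5000):
    """Jacobi solve of the Poisson problem ∇²ψ = -ω with ψ = 0 on the
    boundary (the elliptic step of a vorticity – stream-function scheme)."""
    psi = np.zeros_like(omega)
    for _ in range(iters):
        psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1] +
                                  psi[1:-1, 2:] + psi[1:-1, :-2] +
                                  h * h * omega[1:-1, 1:-1])
    return psi

def velocity(psi, h):
    """Incompressible velocity from the stream function: u = ∂ψ/∂y, v = -∂ψ/∂x."""
    u = np.gradient(psi, h, axis=1)
    v = -np.gradient(psi, h, axis=0)
    return u, v
```

In a production Stokes solver this step would be done with a fast elliptic solver rather than Jacobi, but the structure (vorticity in, stream function and velocity out) is the same.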
-
The influence of solar flares on the release of seismic energy
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 567-581
The influence of solar activity on various processes on Earth has long been a subject of close study, which gave rise to the term "space weather". The most striking manifestations of solar activity are the so-called solar flares: explosive releases of energy in the solar atmosphere that produce a flow of photons and charged particles reaching the Earth with a slight delay, followed two or three days later by a plasma flow. A solar flare is thus an event stretched over several days. The impact of solar flares on human health and the technosphere is a popular subject of discussion and scientific research. This article gives a quantitative assessment, as a percentage of the released seismic energy, of the trigger effect of solar flares on the release of seismic energy worldwide and in 8 areas of the Pacific Ring of Fire. The initial data are a time series of solar flares from July 31, 1996 to the end of 2024. The time points of the largest local extremes of solar-flare intensity and of released seismic energy were studied on successive time intervals of 1 day. For each pair of time sequences, in sliding time windows, the "lead measures" of each sequence relative to the other were estimated using a parametric model of the intensity of interacting point processes. The difference between the "direct" lead measure, of the local extremes of solar-flare intensity relative to the moments of maximum released seismic energy, and the "reverse" lead measure was calculated. The average value of this difference estimates the share of seismic events for which solar flares serve as a trigger.
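A crude empirical stand-in for the direct-minus-reverse lead-measure comparison is to count how often events of one sequence are preceded, within a window, by events of the other, and subtract the reverse count. This is only an illustration of the asymmetry being measured; the paper estimates lead measures from a parametric intensity model of interacting point processes, not from raw counts:

```python
import numpy as np

def lead_fraction(a, b, delta):
    """Fraction of events in sequence b preceded, within the window delta,
    by at least one event of sequence a (crude stand-in for a model-based
    'lead measure')."""
    a = np.sort(np.asarray(a, float))
    hits = 0
    for t in b:
        i = np.searchsorted(a, t)          # number of events of a strictly before t
        if i > 0 and t - a[i - 1] <= delta:
            hits += 1
    return hits / len(b)

def lead_difference(a, b, delta):
    """'Direct' minus 'reverse' lead: positive values suggest that events
    of a tend to precede events of b."""
    return lead_fraction(a, b, delta) - lead_fraction(b, a, delta)
```

Averaging such differences over sliding windows gives a time-resolved picture of which sequence leads, which is the role the lead-measure difference plays in the article.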
-
The stable estimation of intensity of atmospheric pollution source on the base of sequential function specification method
Computer Research and Modeling, 2009, v. 1, no. 4, pp. 391-403The approach given in this work helps to organize the operative control over action intensity of pollution emissions in atmosphere. The approach allows to sequential estimate of unknown intensity of atmospheric pollution source on the base of concentration measurements of impurity in several stationary control points is offered in the work. The inverse problem was solved by means of the step-by-step regularization and the sequential function specification method. The solution is presented in the form of the digital filter in terms of Hamming. The fitting algorithm of regularization parameter r for function specification method is described.
Keywords: atmospheric pollution, digital filter.Views (last year): 2.
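The sequential function specification idea can be sketched for a linear measurement model $c_n = \sum_{k \le n} h_{n-k}\, q_k$: at each step the unknown source intensity is assumed constant over the next $r$ samples and found by least squares, where the number of "future" samples $r$ acts as the regularization parameter. The impulse response `h` below is a hypothetical stand-in for the article's atmospheric transport model:

```python
import numpy as np

def sfs_deconvolve(c_meas, h, r=3):
    """Sequential function specification sketch for c_n = sum_k h_{n-k} q_k:
    at step m the intensity q_m is assumed constant over the next r samples
    and estimated by least squares against the residual concentration,
    then the scheme advances one step (r plays the regularizing role)."""
    n = len(c_meas)
    q = np.zeros(n)
    phi = np.cumsum(h[:r])                 # sensitivity of c to a sustained unit source
    for m in range(n):
        rr = min(r, n - m)
        # concentration at times m..m+rr-1 predicted from already-found q[:m]
        c_hat = np.array([np.dot(h[i + m - np.arange(m)], q[:m]) if m > 0 else 0.0
                          for i in range(rr)])
        resid = c_meas[m:m + rr] - c_hat
        q[m] = np.dot(resid, phi[:rr]) / np.dot(phi[:rr], phi[:rr])
    return q
```

With noiseless data and a source that really is locally constant, the scheme recovers the intensity exactly; with noisy data, larger `r` trades resolution for stability, which is the trade-off the fitting algorithm for the regularization parameter addresses.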
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index




