Search results for 'error function':
Articles found: 42
  1. Kashchenko N.M., Ishanov S.A., Zinin L.V., Matsievsky S.V.
    A numerical method for solving two-dimensional convection equation based on the monotonized Z-scheme for Earth ionosphere simulation
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 43-58

    The paper investigates a second-order finite difference scheme based on the Z-scheme by numerically solving several two-dimensional differential equations that simulate convection in an incompressible medium.

    One practical application of such equations is the numerical simulation of strongly non-stationary mid-scale processes in the Earth's ionosphere. Because convection in ionospheric plasma is controlled by the magnetic field, the plasma is assumed incompressible across the magnetic field. For the same reason, heat and mass convection along the magnetic field can reach rather high velocities.

    A relevant task in ionospheric simulation is the study of plasma instabilities of various scales, which arise primarily in the polar and equatorial regions. Mid-scale irregularities with characteristic sizes of 1–50 km create conditions for the development of small-scale instabilities. The latter lead to the spread-F phenomenon, which significantly affects the accuracy of satellite positioning systems as well as other space- and ground-based radio-electronic systems.

    Difference schemes used for the simultaneous simulation of such multi-scale processes must be both high-resolution and monotonic. These requirements conflict because instabilities amplify the errors of difference schemes, especially dispersion-type errors. Such amplification of errors usually produces nonphysical results in the numerical solution.

    For the numerical solution of three-dimensional mathematical models of ionospheric plasma, the following splitting by physical processes is used: the first splitting step performs convection along the magnetic field, and the second performs convection across it. The second-order finite difference scheme investigated in this paper approximately solves the cross-field convection equations. The scheme is constructed by a nonlinear monotonization procedure applied to the Z-scheme, one of the second-order schemes. The monotonization uses a nonlinear correction with so-called "oblique differences", which involve grid nodes belonging to different time layers.

    The study covers two cases, in which the components of the convection vector in the simulation domain had (1) constant sign and (2) variable sign. The dissipative and dispersive characteristics of the scheme were obtained numerically for different types of limiting functions.

    The results of the numerical experiments lead to the following conclusions.

    1. For a discontinuous initial profile, the SuperBee limiter showed the best properties.

    2. For a continuous initial profile, the SuperBee limiter is better at large spatial steps, while the Koren limiter is better at small steps.

    3. For a smooth initial profile, the Koren limiter showed the best results.

    4. The smooth F limiter showed results similar to those of the Koren limiter.

    5. Limiters of different types leave dispersive errors; the dependence of these errors on the scheme parameters is highly variable and intricate.

    6. The monotonicity of the considered difference scheme was confirmed numerically in all calculations. The total-variation non-increase property was confirmed numerically for the one-dimensional equation for all of the limiter functions listed.

    7. At time steps not exceeding the Courant step, the constructed difference scheme is monotonic and shows good accuracy for different types of solutions. When the Courant step is exceeded, the scheme remains stable but becomes unsuitable for instability problems, since the monotonicity conditions are no longer satisfied.
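
    For readers unfamiliar with the limiters compared above, their standard textbook forms can be sketched in a generic flux-limited upwind scheme for 1D linear advection (an illustrative sketch, not the paper's Z-scheme; the function names and test profile are ours):

```python
import numpy as np

def superbee(r):
    # SuperBee limiter: phi(r) = max(0, min(2r, 1), min(r, 2))
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                      np.minimum(r, 2.0)))

def koren(r):
    # Koren limiter: phi(r) = max(0, min(2r, (1 + 2r)/3, 2))
    return np.maximum(0.0,
                      np.minimum(np.minimum(2.0 * r, (1.0 + 2.0 * r) / 3.0), 2.0))

def advect_step(u, c, limiter):
    """One step of flux-limited upwind advection for u_t + a u_x = 0 (a > 0),
    periodic boundaries, Courant number c in (0, 1]."""
    dp = np.roll(u, -1) - u                        # forward differences
    dm = u - np.roll(u, 1)                         # backward differences
    safe = np.where(np.abs(dp) > 1e-14, dp, 1.0)   # avoid division by zero
    r = np.where(np.abs(dp) > 1e-14, dm / safe, 0.0)
    flux = u + 0.5 * (1.0 - c) * limiter(r) * dp   # limited interface value
    return u - c * (flux - np.roll(flux, 1))

def total_variation(u):
    return np.sum(np.abs(np.roll(u, -1) - u))
```

    With either limiter, evolving a discontinuous profile at a Courant number below one keeps the total variation from increasing, in line with conclusion 6 above.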

  2. Vostrikov D.D., Konin G.O., Lobanov A.V., Matyukhin V.V.
    Influence of the mantissa finiteness on the accuracy of gradient-free optimization methods
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 259-280

    Gradient-free optimization methods, or zeroth-order methods, are widely used in training neural networks, in reinforcement learning, and in industrial tasks where only the values of a function at a point are available (working with non-analytical functions). In particular, the error back-propagation method in PyTorch operates on exactly this principle. It is well known that computer calculations rely on floating-point arithmetic, which gives rise to the problem of a finite mantissa.

    In this paper we, first, review the most popular methods of gradient approximation: finite forward/central differences (FFD/FCD), forward/central component-wise differences (FWC/CWC), and forward/central randomization on the $l_2$ sphere (FSSG2/CFFG2); second, we describe the current theoretical models of the noise introduced by inaccurate computation of the function at a point: adversarial noise and random noise; third, we conduct a series of experiments on frequently encountered classes of problems, such as the quadratic problem, logistic regression, and SVM, to determine whether the real nature of machine noise corresponds to the existing theory. It turns out that in practice (at least for the classes of problems considered in this paper) machine noise is something between adversarial and random noise, so the current theory on the influence of the finite mantissa on the search for an optimum in gradient-free optimization problems requires some adjustment.
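
    Two of the gradient approximations listed above can be sketched as follows (a minimal illustration of full central differences and of two-point randomization on the $l_2$ sphere; the function names, step size, and scaling conventions are ours, not necessarily the paper's):

```python
import numpy as np

def grad_central(f, x, h=1e-6):
    """Full central-difference gradient estimate: 2n function evaluations."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def grad_sphere(f, x, h=1e-6, rng=None):
    """Randomized two-point estimate along a uniform direction on the
    l2 sphere: 2 function evaluations, unbiased only in expectation."""
    if rng is None:
        rng = np.random.default_rng(0)
    e = rng.standard_normal(x.size)
    e /= np.linalg.norm(e)
    return x.size * (f(x + h * e) - f(x - h * e)) / (2.0 * h) * e
```

    Averaging many sphere estimates recovers the gradient; a single estimate is cheap but noisy, which is exactly where the machine-noise model matters.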

  3. Zabello K.K., Garbaruk A.V.
    Investigation of the accuracy of the lattice Boltzmann method in calculating acoustic wave propagation
    Computer Research and Modeling, 2025, v. 17, no. 6, pp. 1069-1081

    The article presents a systematic investigation of the capabilities of the lattice Boltzmann method (LBM) for modeling the propagation of acoustic waves. The study considers the problem of wave propagation from a point harmonic source in an unbounded domain, both in a quiescent medium (Mach number $M=0$) and in the presence of a uniform mean flow ($M=0.2$). Both scenarios admit analytical solutions within the framework of linear acoustics, allowing for a quantitative assessment of the accuracy of the numerical method.

    The numerical implementation employs the two-dimensional D2Q9 velocity model and the Bhatnagar – Gross – Krook (BGK) collision operator. The oscillatory source is modeled using Guo's scheme, while spurious high-order moment noise generated by the source is suppressed via a regularization procedure applied to the distribution functions. To minimize wave reflections from the boundaries of the computational domain, a hybrid approach is used, combining characteristic boundary conditions based on Riemann invariants with perfectly matched layers (PML) featuring a parabolic damping profile.

    A detailed analysis is conducted to assess the influence of computational parameters on the accuracy of the method. The dependence of the error on the PML thickness ($L_{\text{PML}}$), the maximum damping coefficient ($\sigma_{\max}$), the dimensionless source amplitude ($Q_0'$), and the grid resolution is thoroughly examined. The results demonstrate that the LBM is suitable for simulating acoustic wave propagation and exhibits second-order accuracy. It is shown that achieving high accuracy (relative pressure error below $1\,\%$) requires a spatial resolution of at least $20$ grid points per wavelength ($\lambda$). The minimal effective PML parameters ensuring negligible boundary reflections are identified as $\sigma_{\max}\geqslant 0.02$ and $L_{\text{PML}} \geqslant 2\lambda$. Additionally, it is shown that for source amplitudes $Q_0' \geqslant 0.1$, nonlinear effects become significant compared to other sources of error.
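
    A minimal collide-and-stream step for the D2Q9/BGK combination used in the paper can be sketched as follows (periodic grid, no acoustic source, regularization, or PML; purely an illustrative sketch):

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwellian equilibrium for each of the 9 directions."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def bgk_step(f, tau):
    """One collide-and-stream step with the BGK operator on a periodic grid."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau   # BGK relaxation
    for k in range(9):                             # periodic streaming
        f[k] = np.roll(np.roll(f[k], c[k, 0], axis=0), c[k, 1], axis=1)
    return f
```

    Collision and periodic streaming both conserve total mass and momentum exactly, which is a convenient sanity check before adding a source or absorbing layers.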

  4. Emaletdinova L.Y., Mukhametzyanov Z.I., Kataseva D.V., Kabirova A.N.
    A method of constructing a predictive neural network model of a time series
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 737-756

    This article studies a method of constructing a predictive neural network model of a time series based on determining the composition of input variables, constructing a training sample, and training the network by the back-propagation method. Traditional methods of constructing predictive models of time series, such as the autoregressive model, the moving average model, and the autoregressive moving average model, approximate the time series by a linear dependence of the current value of the output variable on a number of its previous values. This linearity restriction leads to significant forecasting errors.

    Data mining technologies based on neural network modeling make it possible to approximate the time series by a nonlinear dependence. Moreover, the process of constructing a neural network model (determining the composition of input variables, the number of layers and the number of neurons in each layer, choosing the activation functions of the neurons, and determining the optimal values of the link weights) yields a predictive model in the form of an analytical nonlinear dependence.

    Determining the composition of input variables is one of the key points in the construction of neural network models in various application areas and directly affects their adequacy. The composition of the input variables is traditionally chosen from physical considerations or by trial selection. In this work, it is proposed to use the behavior of the autocorrelation and partial autocorrelation functions to determine the composition of the input variables of a predictive neural network model of a time series.

    This work proposes a method for determining the composition of input variables of neural network models for stationary and non-stationary time series, based on the construction and analysis of autocorrelation functions. Based on the proposed method, an algorithm and a program were developed in Python that determine the composition of the input variables of a predictive neural network model (a perceptron) and build the model itself. The method was tested experimentally by constructing a predictive neural network model of a time series of energy consumption in different regions of the United States, openly published by PJM Interconnection LLC (PJM), a regional transmission organization in the United States. This time series is non-stationary and exhibits both trend and seasonality. Predicting subsequent values of the time series from previous values with the constructed neural network model showed high approximation accuracy, demonstrating the effectiveness of the proposed method.
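
    The ACF-based selection of input lags described above can be sketched as follows (a simplified illustration using the sample ACF and the approximate white-noise confidence band; the paper's actual algorithm may differ):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function up to max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / var
                     for k in range(max_lag + 1)])

def select_lags(x, max_lag, z=1.96):
    """Pick candidate input lags whose autocorrelation exceeds the
    approximate white-noise confidence band +-z/sqrt(N)."""
    bound = z / np.sqrt(len(x))
    r = acf(x, max_lag)
    return [k for k in range(1, max_lag + 1) if abs(r[k]) > bound]
```

    For a seasonal series, the seasonal lag shows up as a significant autocorrelation and is therefore selected as an input variable.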

  5. Current mathematical and computer modeling of thermal processes in technical systems is based on the assumption that all the parameters determining these processes are fully and unambiguously known and identified (i.e., deterministic). Meanwhile, experience shows that the parameters determining thermal processes are of an uncertain interval-stochastic character, which in turn makes the thermal processes in an electronic system interval-stochastic as well. This means that the actual temperature of each element in a technical system is randomly distributed within its variation interval. Therefore, a deterministic approach to modeling thermal processes, which yields specific values of element temperatures, does not allow one to adequately calculate the temperature distribution in electronic systems. The interval-stochastic nature of the parameters determining thermal processes depends on three groups of factors: (a) statistical technological variation of the parameters of the elements when manufacturing and assembling the system; (b) the random nature of factors caused by the operation of the technical system (fluctuations in current and voltage; in power; and in the temperatures and flow rates of the cooling fluid and the medium inside the system); and (c) the randomness of ambient parameters (temperature, pressure, and flow rate). The interval-stochastic indeterminacy of the determining factors in technical systems is irremediable; neglecting it causes errors when designing electronic systems. This paper develops a method for modeling unsteady interval-stochastic thermal processes in technical systems, including those with interval indeterminacy of the determining parameters.
The method is based on deriving and solving equations for the unsteady statistical measures (mathematical expectations, variances, and covariances) of the temperature distribution in a technical system at given variation intervals and statistical measures of the determining parameters. Application of the method to modeling the interval-stochastic thermal process in a particular electronic system is considered.

    Views (last year): 15. Citations: 6 (RSCI).
  6. Shumixin A.G., Aleksandrova A.S.
    Identification of a controlled object using frequency responses obtained from a dynamic neural network model of a control system
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 729-740

    We present the results of a study aimed at identifying the channels of a controlled object based on post-processing of measurements, with the development of a model of a multiple-input controlled object and a subsequent active modeling experiment. The controlled object model is built by approximating its behavior with a neural network model using trends obtained during a passive experiment in the normal operation mode. A recurrent neural network containing feedback elements makes it possible to simulate the behavior of dynamic objects; input and feedback time delays make it possible to simulate inertial objects with pure delay. The model was trained on examples of the object's operation within a control system and consists of a dynamic neural network and a model of a regulator with a known regulation function. The neural network model simulates the system's behavior and is used to conduct active computational experiments. It yields the controlled object's response to an exploratory stimulus, including a periodic one. The obtained complex frequency response is used to estimate the parameters of the object's transfer function by the least squares method. We present an example of identifying a channel of the simulated control system. The simulated object has two inputs and one output and varying transport delays in its transfer channels. One of the inputs serves as the controlling stimulus; the second is a controlled perturbation. The controlled output changes as a result of the control stimulus produced by the regulator, which operates according to the proportional-integral regulation law based on the deviation of the controlled value from the setpoint. The obtained parameters of the channels' transfer functions are close to those of the simulated object.
The normalized error of the response to a single step-wise stimulus of the control system model built from the identification of the simulated control system does not exceed 0.08. The considered objects belong to the class of continuous-production technological processes. Such objects are characteristic of the chemical, metallurgical, mining and milling, pulp and paper, and other industries.
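
    The final least-squares step, estimating transfer-function parameters from complex frequency-response samples, can be sketched for a hypothetical first-order channel $W(s) = K/(Ts + 1)$ without delay (the paper's channels also include transport delays, which this sketch omits):

```python
import numpy as np

def fit_first_order(omega, W):
    """Fit gain K and time constant T of W(s) = K/(T s + 1) to complex
    frequency-response samples W(i*omega) by linear least squares,
    using the linearization 1/W = 1/K + i*omega*(T/K)."""
    omega = np.asarray(omega, dtype=float)
    y = 1.0 / np.asarray(W)
    a = np.mean(y.real)                                # LS fit of Re(1/W) = 1/K
    b = np.dot(omega, y.imag) / np.dot(omega, omega)   # LS fit of Im(1/W) = omega*T/K
    K = 1.0 / a
    return K, b * K
```

    On noiseless data the parameters are recovered exactly; with the noisy frequency response produced by a neural network model, the same fit gives least-squares estimates.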

    Views (last year): 10.
  7. Shabanov A.E., Petrov M.N., Chikitkin A.V.
    A multilayer neural network for determination of particle size distribution in Dynamic Light Scattering problem
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 265-273

    Solving the Dynamic Light Scattering problem makes it possible to determine the particle size distribution (PSD) from the spectrum of the intensity of scattered light. An experiment yields an intensity curve, and the experimentally obtained intensity spectrum is compared with the theoretically expected spectrum, which is a Lorentzian line. The main task is to determine, on the basis of these data, the relative concentrations of particles of each class present in the solution. The article presents a method for constructing and using a neural network trained on synthetic data to determine the PSD in a solution in the range of 1–500 nm. The neural network has a fully connected layer of 60 neurons with the ReLU activation function, a layer of 45 neurons with the same activation function, a dropout layer, and two layers with 15 and 1 neurons (the network output). The article describes how the network was trained and tested on synthetic and experimental data. On the synthetic data, the root-mean-square error (RMSE) was 1.3157 nm. Experimental data were obtained for particle sizes of 200 nm, 400 nm, and a solution containing both sizes. The results of the neural network and classical linear methods are compared. The disadvantage of the classical methods is that the degree of regularization is difficult to choose: too much regularization over-smooths the particle size distribution curves, while weak regularization gives oscillating curves and low reliability of the results. The paper shows that the neural network gives a good prediction for particles of large size. For small sizes the prediction is worse, but the error quickly decreases as the particle size increases.
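
    The Lorentzian line used as the theoretical spectrum can be sketched together with the Stokes–Einstein relation (the numerical constants for temperature, viscosity, and scattering wave vector are illustrative assumptions, not the paper's values):

```python
import numpy as np

# Illustrative constants (assumptions, not the paper's experimental setup):
KB = 1.380649e-23   # Boltzmann constant, J/K
T = 293.15          # temperature, K
ETA = 1.0e-3        # viscosity of water, Pa*s
Q = 2.0e7           # scattering wave vector, 1/m (depends on angle/wavelength)

def half_width(d_nm):
    """Lorentzian half-width Gamma = D q^2 for a particle of diameter d_nm,
    with the Stokes-Einstein diffusion coefficient D = kT / (3 pi eta d)."""
    d = d_nm * 1e-9
    D = KB * T / (3 * np.pi * ETA * d)
    return D * Q**2

def spectrum(freqs, sizes_nm, weights):
    """Intensity spectrum of a mixture: weighted sum of Lorentzian lines."""
    s = np.zeros_like(freqs, dtype=float)
    for d, wgt in zip(sizes_nm, weights):
        g = half_width(d)
        s += wgt * g / (freqs**2 + g**2)
    return s
```

    Larger particles diffuse more slowly and therefore give narrower lines, which is why recovering the relative weights from an observed mixture spectrum is an ill-posed inverse problem.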

    Views (last year): 16.
  8. Doludenko A.N., Kulikov Y.M., Saveliev A.S.
    Chaotic flow evolution arising in a body force field
    Computer Research and Modeling, 2024, v. 16, no. 4, pp. 883-912

    This article presents the results of an analytical and computational study of the chaotic evolution of a regular velocity field generated by large-scale harmonic forcing. The authors obtained an analytical solution for the flow stream function and its derived quantities (velocity, vorticity, kinetic energy, enstrophy, and palinstrophy). Numerical modeling of the flow evolution was carried out using the OpenFOAM software package based on an incompressible model, as well as two in-house implementations of the CABARET and MacCormack methods employing a nearly incompressible formulation. Calculations were carried out on a sequence of nested meshes with $64^2$, $128^2$, $256^2$, $512^2$, and $1024^2$ cells for two characteristic (asymptotic) Reynolds numbers corresponding to laminar and turbulent evolution of the flow, respectively. The simulations show that blow-up of the analytical solution takes place in both cases. The energy characteristics of the flow are discussed on the basis of the energy curves and the dissipation rates. For the fine mesh, the numerical dissipation rate turns out to be several orders of magnitude smaller than its hydrodynamic (viscous) counterpart. Destruction of the regular flow structure is observed for all of the numerical methods, including the late stages of laminar evolution, when the numerically obtained distributions are close to the analytical ones. It can be assumed that the prerequisite for the development of instability is the error accumulated during the calculation. This error leads to unevenness in the vorticity distribution, hence to variations in vortex intensity, and finally to chaotization of the flow. To study the processes of vorticity production, we used two integral vorticity-based quantities: the integral enstrophy ($\zeta$) and the palinstrophy ($P$). The formulation of the problem with periodic boundary conditions allows us to establish a simple connection between these quantities.
In addition, $\zeta$ can act as a measure of the eddy resolution of the numerical method, while the palinstrophy determines the degree of production of small-scale vorticity.
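
    For reference, in a 2D incompressible flow with periodic boundary conditions and no forcing, the classical identity connecting enstrophy decay to palinstrophy reads (the relation used in the paper, which must account for the forcing term, may differ):

```latex
\frac{d\zeta}{dt} = -2\nu P,
\qquad
\zeta = \frac{1}{2}\int_\Omega \omega^{2}\,dx,
\qquad
P = \frac{1}{2}\int_\Omega |\nabla\omega|^{2}\,dx,
```

    where $\nu$ is the kinematic viscosity and $\omega$ the vorticity; the boundary terms vanish by periodicity.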

  9. Safaryan O.A.
    Determining the characteristics of a random process by comparing them with values based on models of distribution laws
    Computer Research and Modeling, 2025, v. 17, no. 6, pp. 1105-1118

    The effectiveness of communication and data transmission systems (CSiPS), which are an integral part of modern systems in almost any field of science and technology, largely depends on the stability of the frequency of the generated signals. The signals generated in such systems can be considered as processes whose frequency changes under a combination of external influences. A change in signal frequency leads to a decrease in the signal-to-noise ratio (SNR) and, consequently, to a deterioration of system characteristics such as the bit error probability and the bandwidth. It is most convenient to describe such changes in signal frequency as random processes, an apparatus widely used in constructing mathematical models of the functioning of systems and devices in various fields of science and technology. Moreover, in many cases the characteristics of a random process, such as the distribution law, mathematical expectation, and variance, may be unknown or known with errors that preclude estimates of the signal parameters of acceptable accuracy. The article proposes an algorithm for determining the characteristics of a random process (the signal frequency) from a set of samples of its frequency, which makes it possible to determine the sample mean, the sample variance, and the distribution law of frequency deviations in the general population. The basis of the algorithm is a comparison of the values of the observed random process, measured over a certain time interval, with a set of the same number of random values generated from model distribution laws. Distribution laws based on mathematical models of these systems and devices, or corresponding to similar systems and devices, can serve as the model distribution laws.
When generating the set of random values for an accepted model distribution law, the sample mean and variance obtained from the measurements of the observed random process are used as the mathematical expectation and variance. A feature of the algorithm is that the measured values of the observed random process, ordered in ascending or descending order, are compared with the generated sets of values for the accepted model distribution laws. The results of mathematical modeling illustrating the application of the algorithm are presented.
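
    The core step, comparing the ordered observed sample with equally sized ordered draws from candidate model laws, can be sketched as follows (the mean-squared discrepancy measure and the particular model laws are our illustrative choices, not necessarily the paper's):

```python
import numpy as np

def best_model(sample, models, rng=None):
    """Compare the sorted observed sample with equally sized sorted draws
    from each candidate model distribution (parameterized by the sample
    mean and standard deviation) and pick the model with the smallest
    mean-squared discrepancy between the ordered values."""
    if rng is None:
        rng = np.random.default_rng(1)
    sample = np.sort(np.asarray(sample, dtype=float))
    m, s = sample.mean(), sample.std(ddof=1)
    n = sample.size
    scores = {}
    for name, draw in models.items():
        ref = np.sort(draw(rng, m, s, n))
        scores[name] = np.mean((sample - ref) ** 2)
    return min(scores, key=scores.get), scores

# Candidate model laws, each matched to the sample mean m and std s:
models = {
    "normal":  lambda rng, m, s, n: rng.normal(m, s, n),
    "uniform": lambda rng, m, s, n: rng.uniform(m - s * np.sqrt(3),
                                                m + s * np.sqrt(3), n),
}
```

    For a large sample drawn from a normal law, the normal candidate wins because the tails of the ordered uniform draw cannot match the observed extremes.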

  10. Bogomolov S.V.
    Stochastic formalization of the gas dynamic hierarchy
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 767-779

    Mathematical models of gas dynamics and their computational implementations are, in our opinion, far from perfect. We look at this problem from the point of view of a clear probabilistic micro-model of a gas of hard spheres, relying both on the theory of random processes and on classical kinetic theory in terms of distribution function densities in phase space. Namely, we first construct a system of nonlinear stochastic differential equations (SDEs), and then a generalized, random and non-random, integro-differential Boltzmann equation that takes correlations and fluctuations into account. The key feature of the initial model is the random nature of the intensity of the jump measure and its dependence on the process itself.

    We briefly recall the transition to increasingly coarse meso- and macro-approximations as the dimensionless parameter, the Knudsen number, decreases. We obtain stochastic and non-random equations, first in phase space (a meso-model in terms of SDEs with respect to the Wiener measure and the Kolmogorov – Fokker – Planck equations), and then in coordinate space (macro-equations that differ from the Navier – Stokes system and from quasi-gas-dynamic systems). The main distinction of this derivation is a more accurate averaging over velocity, made possible by the analytical solution of the stochastic differential equations with respect to the Wiener measure, in the form of which the intermediate meso-model in phase space is presented. This approach differs significantly from the traditional one, which uses the distribution function rather than the random process itself. The emphasis is placed on the transparency of the assumptions in the transition from one level of detail to another, rather than on numerical experiments, which contain additional approximation errors.

    The theoretical power of the microscopic representation of macroscopic phenomena is also important as conceptual support for particle methods as an alternative to finite difference and finite element methods.


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

