Search results for 'probability':
Articles found: 66
  1. Yakovleva T.V.
    Statistical distribution of the quasi-harmonic signal’s phase: basics of theory and computer simulation
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 287-297

The paper presents the results of fundamental research on the theoretical study and computer simulation of the statistical distribution of the quasi-harmonic signal's phase. A quasi-harmonic signal is formed when Gaussian noise acts on an initially harmonic signal. By means of mathematical analysis, explicit formulas have been obtained for the principal characteristics of this distribution: the cumulative distribution function, the probability density function, and the likelihood function. The conducted computer simulation made it possible to analyze how these functions depend on the parameters of the phase distribution. The paper elaborates methods of estimating the phase distribution parameters, which contain information about the initial, undistorted signal. It is substantiated that the initial phase of the quasi-harmonic signal can be efficiently estimated by averaging the sampled phase measurements. For estimating the second parameter of the phase distribution, which determines the signal level relative to the noise level, the maximum likelihood technique is proposed. Graphical illustrations obtained by computer simulation of the principal characteristics of the phase distribution are presented. The existence and uniqueness of the likelihood function's maximum substantiate the possibility and efficiency of estimating the signal level relative to the noise level by the maximum likelihood technique. The elaborated method of estimating the noise-free signal's level relative to the noise, i.e. the parameter characterizing the signal's intensity, from measurements of the signal's phase is an original and principally new technique which opens prospects for using phase measurements as a tool of stochastic data analysis. The presented investigation is meaningful for determining the phase and the signal level by statistical processing of sampled phase measurements. The proposed methods of estimating the phase distribution parameters can be used in various scientific and technological tasks, in particular in radiophysics, optics, radiolocation, radio navigation, and metrology.
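The phase-averaging estimator described in the abstract can be sketched numerically; the sampling model, function names and parameter values below are our own illustration, not the paper's code:

```python
import numpy as np

def sample_phases(phi0, snr, n, rng):
    """Phases of a quasi-harmonic signal: a harmonic phasor of amplitude
    snr (in units of the per-component noise sigma) plus complex
    Gaussian noise."""
    noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    return np.angle(snr * np.exp(1j * phi0) + noise)

def circular_mean(phases):
    """Average on the circle: the argument of the mean unit phasor."""
    return float(np.angle(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(0)
phi0 = 0.7                                   # true initial phase, rad
phases = sample_phases(phi0, snr=3.0, n=10_000, rng=rng)
phi_hat = circular_mean(phases)
```

With a signal level of three noise units, averaging ten thousand sampled phases recovers the initial phase to within a few milliradians in this toy setup.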

  2. Koganov A.V., Sazonov A.N.
    Critical rate of computing net increase for providing the infinity faultless work
    Computer Research and Modeling, 2009, v. 1, no. 1, pp. 33-39

Fault tolerance of a finite computing net with an arbitrary graph, whose elements fail and are restored with given probabilities, is analyzed. An algorithm for growing the net at each work cycle is suggested. It is shown that if the rate of net growth is sufficiently high, then the probability of infinitely long faultless work is positive. The estimated critical growth rate is logarithmic in the number of work cycles.
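The logarithmic threshold can be illustrated with a toy union-bound calculation (a deliberately simplified redundancy model, not the authors' graph construction): if the net holds ceil(c*ln t) independent spare elements at cycle t, each failing with probability p per cycle, the probability of ever failing is at most the sum of p**ceil(c*ln t) over t, which converges precisely when c > 1/ln(1/p).

```python
import numpy as np

def failure_bound(c, p, t_max):
    """Union bound on the probability that the net fails at some cycle up
    to t_max, in a toy redundancy model: at cycle t the net holds
    ceil(c * ln t) spare elements, each failing independently with
    probability p per cycle."""
    t = np.arange(2, t_max + 1, dtype=float)
    n_t = np.ceil(c * np.log(t))          # net size at cycle t
    return float(np.sum(p ** n_t))
```

With p = 1/2 the critical coefficient is 1/ln 2 ~ 1.44: `failure_bound(3.0, 0.5, 10**5)` stays below 1 (so the probability of infinite faultless work is positive), while `failure_bound(1.0, 0.5, 10**5)` already exceeds 1 and keeps growing with the horizon.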

This article explores a method of machine learning based on the theory of random functions. One of the main problems of this method is that the decision rule of a model becomes more complicated as the number of training examples increases. The decision rule of the model is the most probable realization of a random function, represented as a polynomial whose number of terms equals the number of training examples. In this article we show a quick way to reduce the number of training examples and, accordingly, the complexity of the decision rule. The reduction is achieved by finding and removing weak elements, which have little effect on the final form of the decision function, and noise elements of the sample. For each sample element $(x_i, y_i)$ we introduce the notion of its value, expressed as the deviation of the value at the point $x_i$ of the decision function built without the $i$-th element from the true value $y_i$. We also show the possibility of using weak elements indirectly in the process of training the model, without increasing the number of terms in the decision function. In the experimental part of the article, we show how the changed amount of data affects the generalization ability of the method in a classification task.

    Views (last year): 5.
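The leave-one-out notion of an example's value can be sketched as follows; we substitute a Nadaraya–Watson kernel regressor for the paper's random-function model, so the function names, the bandwidth and the 25% pruning quantile are illustrative assumptions:

```python
import numpy as np

def nw_predict(x_train, y_train, x0, h=0.05):
    """Nadaraya-Watson kernel regression: our stand-in for the most
    probable realization of the random function."""
    w = np.exp(-0.5 * ((x0 - x_train) / h) ** 2)
    return np.dot(w, y_train) / np.sum(w)

def example_values(x, y, h=0.05):
    """Value of each training pair (x_i, y_i): deviation of the
    leave-one-out prediction at x_i from the true y_i."""
    n = len(x)
    vals = np.empty(n)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        mask[i] = False
        vals[i] = abs(nw_predict(x[mask], y[mask], x[i], h) - y[i])
        mask[i] = True
    return vals

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + 0.01 * rng.standard_normal(40)
vals = example_values(x, y)
keep = vals > np.quantile(vals, 0.25)   # drop the 25% weakest (most redundant) examples
```

Points whose removal barely changes the prediction at their own location get small values and are pruned first; in the paper's setting this directly shortens the polynomial decision rule.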
  4. Kuznetsov M.B.
    Investigation of Turing structures formation under the influence of wave instability
    Computer Research and Modeling, 2019, v. 11, no. 3, pp. 397-412

A model classical for nonlinear dynamics, the Brusselator, is considered, augmented by a third variable that plays the role of a fast-diffusing inhibitor. The model is investigated in the one-dimensional case in the parametric domain where two types of diffusive instability of the system's homogeneous stationary state are manifested: wave instability, which leads to spontaneous formation of autowaves, and Turing instability, which leads to spontaneous formation of stationary dissipative structures, or Turing structures. It is shown that, due to the subcritical nature of the Turing bifurcation, the interaction of the two instabilities in this system results in spontaneous formation of stationary dissipative structures even before the Turing bifurcation is passed. In response to different perturbations of the spatially uniform stationary state, different stable regimes are manifested in the vicinity of the double bifurcation point in the parametric region under study: both pure regimes, consisting of either stationary or autowave dissipative structures, and mixed regimes, in which different modes dominate in different areas of the computational space. In the considered region of the parametric space, the system is multistable and exhibits high sensitivity to initial noise conditions, which blurs the boundaries between qualitatively different regimes in the parametric region. Even in the area of dominance of mixed modes with prevalence of Turing structures, the establishment of a pure autowave regime has significant probability. In the case of stable mixed regimes, a sufficiently strong local perturbation in the area of the computational space where the autowave mode is manifested can initiate local formation of new stationary dissipative structures.
Local perturbation of the stationary homogeneous state in the parametric region under investigation leads to a qualitatively similar map of established modes, the zone of dominance of pure autowave regimes expanding with the increase of the local perturbation amplitude. In the two-dimensional case, mixed regimes turn out to be only transient: once localized Turing structures appear under the influence of the wave regime, they eventually occupy all available space.

    Views (last year): 21.
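For orientation, a minimal explicit finite-difference integration of the classic two-variable Brusselator with diffusion in 1D is sketched below (the paper's model additionally contains a third fast-diffusing inhibitor, omitted here; the parameter values are our own choice of a Turing-unstable but Hopf-stable regime):

```python
import numpy as np

# Classic two-variable Brusselator with diffusion on a periodic 1D domain.
a, b = 2.0, 4.5          # kinetics: b > (1 + a*sqrt(Du/Dv))**2, b < 1 + a**2
Du, Dv = 1.0, 10.0       # the inhibitor diffuses much faster than the activator
N, L = 128, 50.0
dx = L / N
dt = 0.005               # satisfies the explicit stability bound dt < dx**2 / (2 * Dv)
steps = 6000             # integrate to t = 30

rng = np.random.default_rng(2)
u = a + 1e-2 * rng.standard_normal(N)      # noisy homogeneous steady state
v = b / a + 1e-2 * rng.standard_normal(N)

def lap(f):
    """Second difference with periodic boundary conditions."""
    return (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx**2

for _ in range(steps):
    u_new = u + dt * (a - (b + 1.0) * u + u**2 * v + Du * lap(u))
    v_new = v + dt * (b * u - u**2 * v + Dv * lap(v))
    u, v = u_new, v_new
```

In this regime the activator field leaves the homogeneous state and settles into a finite-amplitude stationary spatial pattern; reproducing the paper's interaction of wave and Turing instabilities requires the third variable.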
  5. Kozhevnikov V.S., Matyushkin I.V., Chernyaev N.V.
    Analysis of the basic equation of the physical and statistical approach within reliability theory of technical systems
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 721-735

Verification of the physical-statistical approach within reliability theory was carried out for the simplest cases and showed its validity. An analytical solution of the one-dimensional basic equation of the physical-statistical approach is presented under the assumption of a stationary degradation rate. From a mathematical point of view this equation is the well-known continuity equation, where the role of density is played by the distribution function of items in their characteristic phase space, and the role of fluid velocity by the intensity (rate) of the degradation processes. The latter connects the general formalism with the specifics of degradation mechanisms. The cases of constant, linear and quadratic (in the coordinate) degradation rates are analyzed using the method of characteristics. In the first two cases the results correspond to physical intuition. At a constant degradation rate, the shape of the initial distribution is preserved and the distribution itself moves uniformly away from zero. At a linear degradation rate, the distribution either narrows down to a narrow peak (in the singular limit) or expands, with the maximum shifting to the periphery at an exponentially increasing rate. The form of the distribution is again preserved up to its parameters. For an initial normal distribution, the coordinates of the largest value of the distribution maximum during its return motion are obtained analytically.

In the quadratic case, the formal solution demonstrates counterintuitive behavior. The solution is uniquely defined only on part of an infinite half-plane, vanishes together with all its derivatives on the boundary, and is ambiguous when the boundary is crossed. If it is continued to the other region in accordance with the analytical solution, it has a two-humped shape, conserves the amount of substance and, which is devoid of physical meaning, is periodic in time. If it is continued by zero, the conservation property is violated. The anomaly of the quadratic case is explained, though not rigorously, by analogy with the motion of a material point whose acceleration is proportional to the square of its velocity. Here we are dealing with a mathematical curiosity. Numerical calculations are given for all cases. Additionally, the entropy of the probability distribution and the reliability function are calculated, and their correlation is traced.
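The two transparent cases admit a direct check by the method of characteristics (the grid, initial profile and rate values below are our own illustration): at a constant rate the profile translates unchanged, while for a linear rate v(x) = k*x the characteristics are x(t) = x0*exp(k*t) and the density rho(x, t) = rho0(x*exp(-k*t))*exp(-k*t) spreads while conserving total mass.

```python
import numpy as np

# Method-of-characteristics solutions of the continuity equation
#     d rho / d t + d( v(x) rho ) / d x = 0
x = np.linspace(-5.0, 40.0, 4001)
dx = x[1] - x[0]
rho0 = lambda s: np.exp(-0.5 * (s - 3.0) ** 2) / np.sqrt(2.0 * np.pi)

t = 5.0

# Constant rate v(x) = c: the initial shape simply translates.
c = 2.0
rho_const = rho0(x - c * t)

# Linear rate v(x) = k x: characteristics x(t) = x0 * exp(k t), so the
# profile spreads and its maximum drops, the total mass being conserved.
k = 0.3
rho_lin = rho0(x * np.exp(-k * t)) * np.exp(-k * t)

mass_const = float(rho_const.sum() * dx)   # Riemann-sum mass, should stay ~1
mass_lin = float(rho_lin.sum() * dx)
```

Both numerical masses stay at 1 up to quadrature error, and the linear-rate profile is visibly lower and wider than the translated one, matching the text.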

We consider a model of spontaneous formation of a computational structure in the human brain for solving a given class of tasks in the process of performing a series of similar tasks. The model is based on a special definition of a numerical measure of the complexity of the solution algorithm. This measure has an informational property: the complexity of a computational structure consisting of two independent structures is equal to the sum of the complexities of these structures. The probability of spontaneous occurrence of a structure then depends exponentially on its complexity. The exponential coefficient requires experimental determination for each type of problem; it may depend on the form of presentation of the source data and the procedure for issuing the result. This estimation method was applied to the results of a series of experiments that determined the strategy for solving a series of similar problems with a growing number of initial data. These experiments were described in previously published papers. Two main strategies were considered: sequential execution of the computational algorithm, or the use of parallel computing in those tasks where it is effective. These strategies differ in how the calculations are performed. Using the complexity estimates of the schemes, the empirical probability of one of the strategies can be used to calculate the probability of the other. The calculations performed showed a good match between the calculated and empirical probabilities. This confirms the hypothesis about the spontaneous formation of structures that solve the problem during the initial training of a person. The paper contains a brief description of the experiments, detailed computational schemes, a rigorous definition of the complexity measure of computational structures, and the derivation of the dependence of the probability of structure formation on its complexity.
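The exponential dependence can be made concrete with a small helper (all numbers below are hypothetical; as the abstract states, the coefficient lam must be fitted experimentally for each type of problem): since complexity is additive over independent structures, the occurrence weight exp(-lam*C) multiplies over them, and the ratio of two strategies' probabilities is exp(-lam*(C1 - C2)).

```python
import numpy as np

def strategy_probability(p_emp, c_emp, c_other, lam):
    """Predicted probability of the competing strategy, given the
    empirical probability p_emp of the strategy with complexity c_emp,
    under the model P ~ exp(-lam * C).  All numbers here are
    hypothetical placeholders."""
    return p_emp * np.exp(-lam * (c_other - c_emp))

# Additivity of the complexity measure: for two independent structures
# the complexities add, so the occurrence weights multiply.
w = lambda comp, lam=0.5: np.exp(-lam * comp)

p_parallel = strategy_probability(p_emp=0.8, c_emp=4.0, c_other=6.0, lam=0.5)
```

With these placeholder numbers the heavier (parallel) scheme comes out a factor exp(-1) less probable than the sequential one; in the paper this predicted value is what gets compared against the empirical frequency.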

  7. Gladin E.L., Zainullina K.E.
    Ellipsoid method for convex stochastic optimization in small dimension
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1137-1147

The article considers minimization of the expectation of a convex function. Problems of this type often arise in machine learning and a variety of other applications. In practice, stochastic gradient descent (SGD) and similar procedures are usually used to solve them. We propose to use the ellipsoid method with mini-batching, which converges linearly and can be more efficient than SGD for a class of problems. This is verified by our experiments, which are publicly available. The algorithm requires neither smoothness nor strong convexity of the objective to achieve linear convergence; thus, its complexity does not depend on the condition number of the problem. We prove that the method arrives at an approximate solution with given probability when using mini-batches of size proportional to the desired accuracy to the power −2. This enables efficient parallel execution of the algorithm, whereas the possibilities for batch parallelization of SGD are rather limited. Despite fast convergence, the ellipsoid method can result in a greater total number of oracle calls than SGD, which works decently with small batches. The complexity is quadratic in the dimension of the problem, hence the method is suitable for relatively small dimensionalities.
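A textbook deterministic ellipsoid method, which the paper extends with mini-batch gradient estimates, can be sketched as follows (function names and the quadratic smoke test are our own; this is the classical algorithm, not the authors' stochastic variant):

```python
import numpy as np

def ellipsoid_minimize(f, subgrad, x0, radius, n_iter):
    """Classical ellipsoid method: minimize convex f given a subgradient
    oracle and a ball of the given radius around x0 containing a minimizer."""
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    H = radius ** 2 * np.eye(n)        # ellipsoid {z : (z-x)^T H^{-1} (z-x) <= 1}
    best = x.copy()
    for _ in range(n_iter):
        g = subgrad(x)
        if not np.any(g):              # hit an exact minimizer
            return x
        g = g / np.sqrt(g @ H @ g)     # normalize in the ellipsoid metric
        Hg = H @ g
        x = x - Hg / (n + 1)           # center of the remaining half-ellipsoid
        H = n**2 / (n**2 - 1.0) * (H - 2.0 / (n + 1) * np.outer(Hg, Hg))
        if f(x) < f(best):
            best = x.copy()            # guarantees hold for the best iterate
    return best

# Smoke test on a strongly convex quadratic f(x) = |x - x*|^2 / 2.
x_star = np.array([1.0, -2.0])
f = lambda x: 0.5 * np.sum((x - x_star) ** 2)
x_hat = ellipsoid_minimize(f, lambda x: x - x_star, np.zeros(2), radius=10.0, n_iter=300)
```

Each step needs O(n^2) work to update H, which matches the abstract's remark that complexity is quadratic in dimension and the method suits small dimensionalities.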

  8. Grigorieva A.V., Maksimenko M.V.
    Method for processing acoustic emission testing data to define signal velocity and location
    Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1029-1040

Non-destructive acoustic emission testing is an effective and cost-efficient way to examine pressure vessels for hidden defects (cracks, laminations, etc.), as well as the only method sensitive to developing defects. The sound velocity in the test object and its adequate definition in the location scheme are of paramount importance for accurate detection of the acoustic emission source. The acoustic emission data processing method proposed herein comprises a set of numerical methods and allows defining the source coordinates and the most probable velocity for each signal. The method includes pre-filtering of data by amplitude and by time differences, and elimination of electromagnetic interference. A set of numerical methods is then applied to solve the system of nonlinear equations, in particular the Newton–Kantorovich method and a general iterative process. The velocity of a signal from one source is assumed constant in all directions. The center of gravity of the triangle formed by the first three sensors to register the signal is taken as the initial approximation. The method has an important practical application, and the paper provides an example of its approbation in the calibration of an acoustic emission system at a production facility (a hydrocarbon gas purification absorber). Criteria for pre-filtering of data are described. The obtained locations are in good agreement with the signal generation sources, and the velocities even reflect the Rayleigh–Lamb division of acoustic waves due to the different distances of the signal sources from the sensors. The article contains a graph of the average signal velocity against the distance from its source to the nearest sensor. The main advantage of the method is its ability to detect the locations of signals of different velocities within a single test. This increases the degrees of freedom in the calculations and thereby their accuracy.
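The arrival-time location problem can be sketched with an alternating least-squares / Gauss–Newton iteration (a simplification standing in for the authors' Newton–Kantorovich scheme; the sensor layout and all numbers are synthetic): for a fixed source position, the emission time t0 and the slowness 1/v enter the model t_i = t0 + d_i/v linearly, after which the position is corrected by a Gauss–Newton step.

```python
import numpy as np

def locate(sensors, t, p0, n_iter=100):
    """Source location from arrival times t_i = t0 + |sensors_i - p| / v,
    with unknown position p, emission time t0 and velocity v (assumed
    constant in all directions for one source, as in the abstract)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        d = np.linalg.norm(sensors - p, axis=1)
        # for a fixed position, t0 and the slowness s = 1/v are linear
        A = np.column_stack([np.ones_like(d), d])
        (t0, s), *_ = np.linalg.lstsq(A, t, rcond=None)
        # one Gauss-Newton step on the position
        r = t0 + s * d - t                     # time residuals
        J = s * (p - sensors) / d[:, None]     # d r_i / d p
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
    return p, t0, 1.0 / s

# Synthetic check with a known source, velocity and emission time.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0],
                    [10.0, 10.0], [5.0, 0.0]])
p_true, v_true, t0_true = np.array([3.0, 4.0]), 3.0, 0.5
t = t0_true + np.linalg.norm(sensors - p_true, axis=1) / v_true

# initial approximation: center of gravity of the three earliest sensors
p0 = sensors[np.argsort(t)[:3]].mean(axis=0)
p_hat, t0_hat, v_hat = locate(sensors, t, p0)
```

Because the velocity is solved per signal, signals of different velocities within one test each get their own estimate, which is the advantage the abstract emphasizes.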

  9. Kurushina S.E., Shapovalova E.A.
    Origin and growth of the disorder within an ordered state of the spatially extended chemical reaction model
    Computer Research and Modeling, 2017, v. 9, no. 4, pp. 595-607

We review the main points of the mean-field approximation (MFA) in its application to multicomponent stochastic reaction–diffusion systems.

We present the chemical reaction model under study, the Brusselator. We write the kinetic equations of the reaction, supplementing them with terms that describe the diffusion of the intermediate components and the fluctuations of the concentrations of the initial products. We simulate the fluctuations as random Gaussian homogeneous and spatially isotropic fields with zero means and spatial correlation functions with a non-trivial structure. The model parameter values correspond to a spatially inhomogeneous ordered state in the deterministic case.

In the MFA we derive a single-site two-dimensional nonlinear self-consistent Fokker–Planck equation in the Stratonovich interpretation for the spatially extended stochastic Brusselator, which describes the dynamics of the probability density of the component concentrations of the system under consideration. We find the noise intensity values corresponding to two types of solutions of the Fokker–Planck equation: a solution with transient bimodality and a solution with multiple alternation of unimodal and bimodal forms of the probability density. We study numerically the dynamics of the probability density and the time behavior of the variances, expectations, and most probable values of the component concentrations at various noise intensities and values of the bifurcation parameter in the specified region of the problem parameters.

Starting from a certain value of the external noise intensity, disorder originates inside the ordered phase and exists for a finite time; the higher the noise level, the longer this "embryo" of disorder lives. The farther from the bifurcation point, the lower the noise that generates it and the narrower the range of noise intensities at which the system evolves to an ordered, but already new, statistically steady state. At a second noise intensity value, intermittency of the ordered and disordered phases sets in. Increasing the noise intensity makes order and disorder alternate ever more frequently.

    Thus, the scenario of the noise induced order–disorder transition in the system under study consists in the intermittency of the ordered and disordered phases.

    Views (last year): 7.
  10. Yakovleva T.V.
    Signal and noise parameters’ determination at rician data analysis by method of moments of lower odd orders
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 717-728

The paper develops a new mathematical method for joint determination of the signal and noise parameters of the Rice statistical distribution by the method of moments, based upon the analysis of the 1st and 3rd raw moments of the Rician random value. An explicit system of equations has been obtained for the required signal and noise parameters. In the limiting case of a small signal-to-noise ratio, analytical formulas have been derived that allow calculating the required parameters without solving the equations numerically. The technique elaborated in the paper ensures an efficient separation of the informative and noise components of the analyzed data without any a priori restrictions, based solely upon processing the results of the signal's sampled measurements. The task is meaningful for Rician data processing, in particular in magnetic resonance visualization systems, in ultrasound visualization systems, in the analysis of optical signals in range measuring systems, in radiolocation, etc. The results of the investigation have shown that solving the two-parameter task by the proposed technique does not increase the demanded volume of computing resources compared with the one-parameter task solved in the approximation that the second parameter is known a priori. The results of the technique's computer simulation are provided. The numerical calculation of the signal and noise parameters has confirmed the efficiency of the elaborated technique. The accuracy of estimating the sought-for parameters by the technique developed in this paper has been compared with that of the previously elaborated method of moments based upon processing the measured data for the lower even moments of the analyzed signal.

    Views (last year): 10. Citations: 1 (RSCI).
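The paper works with the 1st and 3rd raw moments; as a simpler illustration of moment-based signal/noise separation for Rician data we show the classical even-moment variant (the prior technique the abstract compares against, not the new odd-moment method): E[R^2] = nu^2 + 2*sigma^2 and E[R^4] = nu^4 + 8*sigma^2*nu^2 + 8*sigma^4 give nu^4 = 2*E[R^2]^2 - E[R^4].

```python
import numpy as np

def rice_even_moments_estimate(r):
    """Joint signal (nu) and noise (sigma) estimate from the 2nd and 4th
    raw moments of Rician data (classical even-moment method of moments)."""
    m2 = np.mean(r ** 2)
    m4 = np.mean(r ** 4)
    nu = max(2.0 * m2 ** 2 - m4, 0.0) ** 0.25   # sampling noise can push the argument below zero
    sigma = np.sqrt((m2 - nu ** 2) / 2.0)
    return nu, sigma

# Synthetic Rician sample: envelope of a constant signal in complex
# Gaussian noise.
rng = np.random.default_rng(3)
nu_true, sigma_true = 3.0, 1.0
n = 200_000
r = np.abs(nu_true + sigma_true * rng.standard_normal(n)
           + 1j * sigma_true * rng.standard_normal(n))
nu_hat, sigma_hat = rice_even_moments_estimate(r)
```

Both parameters are recovered jointly from the sample alone, with no a priori knowledge of either, which is the property the paper's odd-moment method shares.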

Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

