All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
- Theoretical substantiation of the mathematical techniques for joint signal and noise estimation in Rician data analysis
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 445-473
The paper solves the two-parameter problem of joint signal and noise estimation for data obeying the Rice distribution, using techniques of mathematical statistics: the maximum likelihood method and variants of the method of moments. The considered variants of the method of moments are joint signal and noise estimation based on the 2nd and 4th moments (MM24) and on the 1st and 2nd moments (MM12). For each of the elaborated methods, explicit systems of equations for the sought signal and noise parameters have been obtained. An important mathematical result of the investigation is that the solution of the system of two nonlinear equations in two variables, the sought signal and noise parameters, has been reduced to the solution of a single equation in one unknown. This matters both for the theoretical investigation of the proposed technique and for its practical application, since it substantially decreases the computational resources required. The theoretical analysis leads to an important practical conclusion: solving the two-parameter problem does not increase the required numerical resources compared with the one-parameter approximation. The problem is meaningful for Rician data processing, in particular image processing in magnetic resonance imaging systems. The theoretical conclusions are confirmed by the results of a numerical experiment.
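As an illustration, a minimal numerical sketch of the MM24 estimator, assuming the standard Rice moment identities $E[X^2]=\nu^2+2\sigma^2$ and $E[X^4]=\nu^4+8\sigma^2\nu^2+8\sigma^4$ (variable names are ours, not the paper's):

```python
import numpy as np
from scipy.stats import rice

# Synthetic Rician sample: signal nu, noise sigma (scipy's shape is b = nu/sigma).
rng = np.random.default_rng(0)
nu, sigma = 2.0, 0.5
x = rice.rvs(nu / sigma, scale=sigma, size=100_000, random_state=rng)

# MM24: eliminating sigma^2 from the two moment equations leaves a single
# equation in one unknown, nu^4 = 2*m2^2 - m4, matching the reduction
# described in the abstract.
m2, m4 = np.mean(x**2), np.mean(x**4)
nu_hat = max(2.0 * m2**2 - m4, 0.0) ** 0.25
sigma_hat = np.sqrt(max(m2 - nu_hat**2, 0.0) / 2.0)
print(f"nu ~ {nu_hat:.3f}, sigma ~ {sigma_hat:.3f}")  # close to 2.0 and 0.5
```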
- Isotropic Multidimensional Catalytic Branching Random Walk with Regularly Varying Tails
Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1033-1039
The study completes a series of the author’s works devoted to the spread of a particle population in a supercritical catalytic branching random walk (CBRW) on a multidimensional lattice. The CBRW model describes the evolution of a system of particles combining random movement with branching (reproduction and death), which occurs only at fixed points of the lattice. The set of such catalytic points is assumed to be finite and arbitrary. In the supercritical regime the size of the population initiated by a parent particle increases exponentially with positive probability. The rate of the spread depends essentially on the distribution tails of the random walk jump. If the jump distribution has “light tails”, the “population front” formed by the particles most distant from the origin moves linearly in time, and the limiting shape of the front is a convex surface. When the random walk jump has independent coordinates with a semiexponential distribution, the population spreads at a power rate in time, and the limiting shape of the front is a star-shaped nonconvex surface. Previously, for regularly varying (“heavy”) tails, we considered the problem of scaled front propagation assuming independence of the components of the random walk jump. Now, without this hypothesis, we examine an “isotropic” case, when the rate of decay of the jump distribution in different directions is given by the same regularly varying function. We specify the probability that, as time goes to infinity, the limiting random set formed by the appropriately scaled positions of the population particles belongs to a set $B \subset \mathbb{R}^d$ containing the origin together with a neighborhood of it. In contrast to the previous results, the random cloud of particles with normalized positions will not, in the time limit, concentrate on the coordinate axes with probability one.
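For intuition only, a toy Monte Carlo of a CBRW with a single catalyst at the origin and isotropic jumps with a regularly varying (Pareto) tail; all parameters are illustrative and not taken from the paper:

```python
import numpy as np

# Branching (birth of two offspring, or death) occurs only at the catalyst;
# away from it, particles perform an ordinary heavy-tailed random walk.
rng = np.random.default_rng(1)
ALPHA = 2.5                    # tail index of the jump length distribution
P_BIRTH, P_DEATH = 0.6, 0.1    # supercritical toy regime: P_BIRTH > P_DEATH

def jump(pos):
    r = rng.pareto(ALPHA) + 1.0           # heavy-tailed jump length
    phi = rng.uniform(0.0, 2.0 * np.pi)   # isotropic direction
    return pos + np.rint([r * np.cos(phi), r * np.sin(phi)]).astype(int)

particles = [np.zeros(2, dtype=int)]      # parent particle at the catalyst
for t in range(1, 26):
    nxt = []
    for p in particles[:20_000]:          # cap to keep the toy run small
        u = rng.random()
        at_catalyst = bool(np.all(p == 0))
        if at_catalyst and u < P_BIRTH:
            nxt += [p.copy(), p.copy()]   # branching at the catalyst
        elif at_catalyst and u < P_BIRTH + P_DEATH:
            pass                          # death at the catalyst
        else:
            nxt.append(jump(p))           # ordinary walk step
    particles = nxt
    if not particles:
        break
    front = max(float(np.linalg.norm(p)) for p in particles)
    print(t, len(particles), round(front, 1))
```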
Keywords: catalytic branching random walk, spread of population.
- Statistical distribution of the quasi-harmonic signal’s phase: basics of theory and computer simulation
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 287-297
The paper presents the results of fundamental research aimed at the theoretical study and computer simulation of the statistical distribution of the quasi-harmonic signal’s phase. The quasi-harmonic signal is formed as a result of Gaussian noise acting on an initially harmonic signal. By means of mathematical analysis, explicit formulas have been obtained for the principal characteristics of this distribution: the cumulative distribution function, the probability density function, and the likelihood function. The conducted computer simulation allowed analyzing the dependencies of these functions on the parameters of the phase distribution. The paper elaborates methods of estimating the parameters of the phase distribution which carry information about the initial, undistorted signal. It is substantiated that the initial phase of the quasi-harmonic signal can be efficiently estimated by averaging the results of sampled measurements. For the second parameter of the phase distribution, the one determining the signal level relative to the noise level, a maximum likelihood technique is proposed. Graphical illustrations obtained by computer simulation of the principal characteristics of the studied phase distribution are presented. The existence and uniqueness of the maximum of the likelihood function substantiate the possibility and efficiency of estimating the signal level relative to the noise level by the maximum likelihood technique. The elaborated method of estimating the noise-free signal level relative to noise, i. e. the parameter characterizing the signal’s intensity, from measurements of the signal’s phase is an original and principally new technique which opens prospects for using phase measurements as a tool of stochastic data analysis. The presented investigation is meaningful for determining the phase and the signal level by statistical processing of sampled phase measurements. The proposed methods of estimating the parameters of the phase distribution can be used in various scientific and technological tasks, in particular in radiophysics, optics, radar, radio navigation, and metrology.
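A sketch of both estimates under the standard phase density for a harmonic signal in additive Gaussian noise, with $a$ the ratio of signal amplitude to noise level (the density formula is the textbook one; all names and numbers are illustrative):

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import minimize_scalar

# Standard density of the phase deviation phi for SNR parameter a = A / sigma.
def phase_pdf(phi, a):
    c = a * np.cos(phi)
    return (np.exp(-0.5 * a**2) / (2 * np.pi)
            + c / (2 * np.sqrt(2 * np.pi)) * np.exp(-0.5 * (a * np.sin(phi))**2)
            * (1 + erf(c / np.sqrt(2))))

rng = np.random.default_rng(2)
theta0, a_true, n = 0.7, 2.0, 5000
z = a_true * np.exp(1j * theta0) + (rng.normal(size=n) + 1j * rng.normal(size=n))
phases = np.angle(z)

# Initial phase: averaging of the sampled phases (done on the circle
# to avoid wrap-around bias).
theta_hat = np.angle(np.mean(np.exp(1j * phases)))

# SNR parameter: the unique maximum of the likelihood, found numerically.
nll = lambda a: -np.sum(np.log(phase_pdf(phases - theta_hat, a)))
a_hat = minimize_scalar(nll, bounds=(1e-3, 20.0), method="bounded").x
print(f"theta0 ~ {theta_hat:.3f}, a ~ {a_hat:.3f}")  # close to 0.7 and 2.0
```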
- Critical rate of computing network growth for ensuring infinitely long faultless operation
Computer Research and Modeling, 2009, v. 1, no. 1, pp. 33-39
Fault tolerance of a finite computing network with an arbitrary graph, containing elements with given probabilities of failure and restoration, is analyzed. An algorithm for network growth at each work cycle is suggested. It is shown that if the growth rate of the network is sufficiently large, then the probability of infinitely long faultless operation is positive. The estimated critical growth rate is logarithmic in the number of work cycles.
- Reduction of the decision rule of the multivariate interpolation and approximation method in the data classification problem
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 475-484
This article explores a machine learning method based on the theory of random functions. One of the main problems of this method is that the decision rule of a model becomes more complicated as the number of training examples increases. The decision rule of the model is the most probable realization of a random function and is represented as a polynomial with the number of terms equal to the number of training examples. In this article we show a quick way to reduce the number of training examples and, accordingly, the complexity of the decision rule. The reduction is achieved by finding and removing weak elements, which have little effect on the final form of the decision function, and noisy sample elements. For each sample element $(x_i, y_i)$ we introduce the concept of its value, expressed as the deviation of the decision function of the model built without the $i$-th element, evaluated at the point $x_i$, from the true value $y_i$. We also show the possibility of using weak elements indirectly in the process of training the model, without increasing the number of terms in the decision function. In the experimental part of the article, we show how changing the amount of data affects the generalization ability of the method in the classification task.
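An illustrative sketch of the element-value computation, with a kernel decision function standing in for the paper's polynomial decision rule (the kernel, regularization and pruning threshold are assumptions of ours):

```python
import numpy as np

# Decision function f(x) = sum_i w_i k(x, x_i), one term per training example.
def fit(X, y, lam=1e-3):
    K = np.exp(-np.sum((X[:, None] - X[None, :])**2, axis=-1))
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, w, x):
    k = np.exp(-np.sum((X_train - x)**2, axis=-1))
    return k @ w

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 2))
y = np.sign(X[:, 0] + X[:, 1])

# Value of the i-th element: deviation of the model built WITHOUT it,
# evaluated at x_i, from the true value y_i.
value = np.empty(len(X))
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    w = fit(X[mask], y[mask])
    value[i] = abs(predict(X[mask], w, X[i]) - y[i])

# Weak elements (small value) barely change the decision function and can
# be dropped, shortening the decision rule.
keep = value > np.quantile(value, 0.25)
print(f"kept {keep.sum()} of {len(X)} training examples")
```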
- Investigation of Turing structure formation under the influence of wave instability
Computer Research and Modeling, 2019, v. 11, no. 3, pp. 397-412
The Brusselator, a classical model of nonlinear dynamics, is considered, augmented by a third variable that plays the role of a fast-diffusing inhibitor. The model is investigated in the one-dimensional case in the parametric domain where two types of diffusive instability of the system’s homogeneous stationary state are manifested: wave instability, which leads to the spontaneous formation of autowaves, and Turing instability, which leads to the spontaneous formation of stationary dissipative structures, or Turing structures. It is shown that, due to the subcritical nature of the Turing bifurcation, the interaction of the two instabilities in this system results in the spontaneous formation of stationary dissipative structures even before the Turing bifurcation point is passed. In response to different perturbations of the spatially uniform stationary state, different stable regimes are manifested in the vicinity of the double bifurcation point in the parametric region under study: pure regimes, which consist of either stationary or autowave dissipative structures, and mixed regimes, in which different modes dominate in different areas of the computational space. In the considered region of the parametric space, the system is multistable and exhibits high sensitivity to initial noise conditions, which blurs the boundaries between qualitatively different regimes. Even in the area where mixed modes with a prevalence of Turing structures dominate, the establishment of a pure autowave regime has significant probability. In the case of stable mixed regimes, a sufficiently strong local perturbation in the area of the computational space where the autowave mode is manifested can initiate the local formation of new stationary dissipative structures. Local perturbation of the stationary homogeneous state in the parametric region under investigation leads to a qualitatively similar map of established modes, with the zone of dominance of pure autowave regimes expanding as the amplitude of the local perturbation increases. In the two-dimensional case, mixed regimes turn out to be only transient: after localized Turing structures appear under the influence of the wave regime, they eventually occupy all available space.
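For orientation, a minimal 1D finite-difference sketch of the classical two-variable Brusselator with diffusion in a Turing-unstable regime; the paper's third fast-diffusing variable is omitted here because its coupling is not specified in the abstract, and all parameters are illustrative:

```python
import numpy as np

# Brusselator: u_t = a - (b+1)u + u^2 v + Du u_xx,  v_t = b u - u^2 v + Dv v_xx.
# Homogeneous steady state (u*, v*) = (a, b/a); with Dv >> Du and b above the
# Turing threshold, noise around it grows into stationary structures.
a, b = 3.0, 9.0
Du, Dv = 2.0, 16.0
N, dx, dt = 256, 0.5, 0.001

rng = np.random.default_rng(4)
u = a + 0.1 * rng.normal(size=N)
v = b / a + 0.1 * rng.normal(size=N)

def lap(f):                               # periodic Laplacian
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

for _ in range(200_000):                  # explicit Euler time stepping
    u, v = (u + dt * (a - (b + 1) * u + u**2 * v + Du * lap(u)),
            v + dt * (b * u - u**2 * v + Dv * lap(v)))
print("pattern amplitude:", u.max() - u.min())
```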
- Analysis of the basic equation of the physical and statistical approach within reliability theory of technical systems
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 721-735
Verification of the physical-statistical approach within reliability theory was carried out for the simplest cases and showed its validity. An analytical solution of the one-dimensional basic equation of the physical-statistical approach is presented under the assumption of a stationary degradation rate. From a mathematical point of view this equation is the well-known continuity equation, where the role of density is played by the distribution density of items in the phase space of their characteristics, and the role of fluid velocity is played by the intensity (rate) of degradation processes. The latter connects the general formalism with the specifics of degradation mechanisms. The cases of coordinate-constant, linear and quadratic degradation rates are analyzed using the method of characteristics. In the first two cases, the results correspond to physical intuition. At a constant degradation rate, the shape of the initial distribution is preserved, and the distribution itself moves uniformly away from zero. At a linear degradation rate, the distribution either narrows to a sharp peak (in the singular limit) or expands, with the maximum shifting to the periphery at an exponentially increasing rate. The shape of the distribution is again preserved up to parameters. For an initial normal distribution, the coordinates of the largest value of the distribution maximum during its return motion are obtained analytically.
In the quadratic case, the formal solution demonstrates counterintuitive behavior. The solution is uniquely defined only on a part of an infinite half-plane, vanishes together with all derivatives on the boundary, and is ambiguous when crossing the boundary. If it is continued to the other region in accordance with the analytical solution, it has a two-humped appearance, conserves the amount of substance and, devoid of physical meaning, is periodic in time. If it is continued by zero, the conservation property is violated. The anomaly of the quadratic case is explained, albeit not rigorously, by analogy with the motion of a material point with acceleration proportional to the square of its velocity. Here we are dealing with a mathematical curiosity. Numerical calculations are given for all cases. Additionally, the entropy of the probability distribution and the reliability function are calculated, and their correlation is traced.
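For reference, the method-of-characteristics solutions behind the first two cases (notation is ours: $f$ for the distribution density, $v(x)$ for the degradation rate):

```latex
\[
\frac{\partial f}{\partial t} + \frac{\partial}{\partial x}\bigl(v(x)\,f\bigr) = 0 .
\]
% Constant rate v(x) = v_0: the initial shape is transported unchanged,
\[
f(x,t) = f_0(x - v_0 t).
\]
% Linear rate v(x) = \lambda x: along the characteristics x(t) = x_0 e^{\lambda t},
\[
f(x,t) = e^{-\lambda t}\, f_0\!\bigl(x\, e^{-\lambda t}\bigr),
\]
% so the profile keeps its form up to parameters: for \lambda > 0 it spreads,
% with the maximum moving to the periphery at an exponential rate; for
% \lambda < 0 it contracts toward a narrow peak in the singular limit.
```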
- Estimation of the probability of spontaneous synthesis of computational structures in relation to the implementation of parallel information processing
Computer Research and Modeling, 2021, v. 13, no. 4, pp. 677-696
We consider a model of the spontaneous formation of a computational structure in the human brain for solving a given class of tasks in the process of performing a series of similar tasks. The model is based on a special definition of a numerical measure of the complexity of the solution algorithm. This measure has an additivity property typical of information measures: the complexity of a computational structure consisting of two independent structures equals the sum of the complexities of those structures. The probability of spontaneous formation of a structure then depends exponentially on its complexity. The exponential coefficient requires experimental determination for each type of task; it may depend on the form of presentation of the source data and on the procedure for reporting the result. This estimation method was applied to the results of a series of experiments that determined the strategy for solving a series of similar problems with a growing amount of initial data. Those experiments were described in previously published papers. Two main strategies were considered: sequential execution of the computational algorithm, and the use of parallel computing in tasks where it is effective. The strategies differ in how the calculations are performed. Using the complexity estimate of the schemes, the empirical probability of one strategy can be used to calculate the probability of the other. The calculations performed showed a good match between the calculated and empirical probabilities, which confirms the hypothesis about the spontaneous formation of structures solving the problem during a person’s initial training. The paper contains a brief description of the experiments, detailed computational schemes, a rigorous definition of the complexity measure of computational structures, and the derivation of the dependence of the probability of structure formation on its complexity.
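A toy calculation of the relation described, assuming the two strategies are the only alternatives and that their probabilities are proportional to $e^{-\beta C}$ (all numbers below are made up for illustration, not the paper's data):

```python
import numpy as np

# Hypothetical complexities of the sequential and parallel schemes, and a
# hypothetical empirical frequency of the sequential strategy.
C_seq, C_par = 10.0, 14.0
p_seq_emp = 0.8

# Calibrate beta from the empirical probability of one strategy:
# p_seq / p_par = exp(beta * (C_par - C_seq)).
beta = np.log(p_seq_emp / (1.0 - p_seq_emp)) / (C_par - C_seq)

# Predict strategy probabilities for another task with different complexities.
C2_seq, C2_par = 12.0, 13.0
w_seq, w_par = np.exp(-beta * C2_seq), np.exp(-beta * C2_par)
print(f"beta = {beta:.3f}, predicted p_par = {w_par / (w_seq + w_par):.3f}")
```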
- Ellipsoid method for convex stochastic optimization in small dimension
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1137-1147
The article considers minimization of the expectation of a convex function. Problems of this type often arise in machine learning and a variety of other applications. In practice, stochastic gradient descent (SGD) and similar procedures are usually used to solve such problems. We propose to use the ellipsoid method with mini-batching, which converges linearly and can be more efficient than SGD on a class of problems. This is verified by our experiments, which are publicly available. The algorithm requires neither smoothness nor strong convexity of the objective to achieve linear convergence; thus, its complexity does not depend on the condition number of the problem. We prove that the method arrives at an approximate solution with a given probability when using mini-batches of size proportional to the desired accuracy to the power −2. This enables efficient parallel execution of the algorithm, whereas the possibilities for batch parallelization of SGD are rather limited. Despite its fast convergence, the ellipsoid method can result in a greater total number of oracle calls than SGD, which works decently with small batches. The complexity is quadratic in the dimension of the problem; hence the method is suitable for relatively small dimensions.
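A minimal sketch of the textbook ellipsoid update driven by mini-batched stochastic subgradients (the objective and all parameters are illustrative stand-ins, not the paper's exact algorithm or experiments):

```python
import numpy as np

rng = np.random.default_rng(5)
d, R = 5, 10.0                 # small dimension; ball of radius R holds the solution
x_star = rng.normal(size=d)

def batch_grad(x, batch=256):
    # Subgradient of E_xi |<x - x_star, xi>| estimated on a mini-batch;
    # the expectation is convex but neither smooth nor strongly convex.
    xi = rng.normal(size=(batch, d))
    return np.mean(np.sign(xi @ (x - x_star))[:, None] * xi, axis=0)

x, H = np.zeros(d), R**2 * np.eye(d)
for _ in range(300):
    g = batch_grad(x)
    Hg = H @ g
    gHg = g @ Hg
    if gHg < 1e-24:
        break
    # Central-cut ellipsoid update.
    x = x - Hg / ((d + 1) * np.sqrt(gHg))
    H = d**2 / (d**2 - 1) * (H - 2.0 / (d + 1) * np.outer(Hg, Hg) / gHg)
print("distance to minimizer:", np.linalg.norm(x - x_star))
```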
- Method for processing acoustic emission testing data to determine signal velocity and location
Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1029-1040
Non-destructive acoustic emission testing is an effective and cost-efficient way to examine pressure vessels for hidden defects (cracks, laminations, etc.), as well as the only method sensitive to developing defects. The sound velocity in the test object, and its adequate definition in the location scheme, are of paramount importance for accurate detection of the acoustic emission source. The acoustic emission data processing method proposed herein comprises a set of numerical methods and allows determining the source coordinates and the most probable velocity for each signal. The method includes pre-filtering of the data by amplitude and by time differences, and elimination of electromagnetic interference. A set of numerical methods is then applied to solve the resulting system of nonlinear equations, in particular the Newton–Kantorovich method and a general iterative process. The velocity of a signal from one source is assumed to be constant in all directions. The initial approximation is taken to be the centroid of the triangle formed by the first three sensors that registered the signal. The method has an important practical application, and the paper provides an example of its approbation in the calibration of an acoustic emission system at a production facility (a hydrocarbon gas purification absorber). Criteria for pre-filtering the data are described. The obtained locations are in good agreement with the signal generation sources, and the velocities even reflect the Rayleigh–Lamb division of acoustic waves caused by the different distances from the signal sources to the sensors. The article contains a plot of the average signal velocity against the distance from the source to the nearest sensor. The main advantage of the method is its ability to locate signals with different velocities within a single test, which adds a degree of freedom to the calculations and thereby increases their accuracy.
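A sketch of the location step as a system of arrival-time equations in the source coordinates, emission time and velocity; scipy's least_squares stands in here for the Newton–Kantorovich iteration, and the sensor layout and times are made up:

```python
import numpy as np
from scipy.optimize import least_squares

# Four sensors, four unknowns (x, y, t0, v): t_i = t0 + dist(sensor_i, source)/v.
sensors = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0], [0.0, 4.0]])
src_true, v_true, t0_true = np.array([2.5, 1.5]), 3.0, 0.1
t_obs = t0_true + np.linalg.norm(sensors - src_true, axis=1) / v_true

def residuals(p):
    x, y, t0, v = p
    return t0 + np.linalg.norm(sensors - [x, y], axis=1) / v - t_obs

# Initial approximation: centroid of the first three sensors that registered
# the signal, as in the abstract; t0 and v start from rough guesses.
order = np.argsort(t_obs)
p0 = [*sensors[order[:3]].mean(axis=0), 0.0, 1.0]
sol = least_squares(residuals, p0, bounds=([-10, -10, -10, 0.1], [10, 10, 10, 10]))
print("source:", sol.x[:2], "velocity:", sol.x[3])
```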