All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Determination of signal and noise parameters in Rician data analysis by the method of moments of lower odd orders
Computer Research and Modeling, 2017, v. 9, no. 5, pp. 717-728
The paper develops a new mathematical method for the joint determination of the signal and noise parameters of the Rice statistical distribution by the method of moments, based on analysis of the 1st and 3rd raw moments of the Rician random variable. An explicit system of equations for the sought signal and noise parameters has been obtained. In the limiting case of a small signal-to-noise ratio, analytical formulas have been derived that allow calculating the required parameters without solving the equations numerically. The technique elaborated in the paper ensures an efficient separation of the informative and noise components of the analyzed data without any a priori restrictions, based solely on processing the results of sampled signal measurements. The task is meaningful for Rician data processing, in particular in magnetic resonance imaging systems, in ultrasound visualization systems, in the analysis of optical signals in range-measuring systems, in radar, etc. The results of the investigation show that solving the two-parameter task by the proposed technique does not increase the demanded volume of computing resources compared with the one-parameter task solved in the approximation that the second parameter is known a priori. The results of computer simulation of the elaborated technique are provided. The numerical calculations of the signal and noise parameters confirm the efficiency of the elaborated technique.
The accuracy of estimating the sought parameters by the technique developed in this paper has been compared with that of the previously elaborated method of moments based on processing the measured data for the lower even moments of the analyzed signal.
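For comparison, the even-moments baseline mentioned above admits a closed-form solution: for the Rice distribution, E[x²] = ν² + 2σ² and E[x⁴] = ν⁴ + 8σ²ν² + 8σ⁴, so ν⁴ = 2(E[x²])² − E[x⁴]. A minimal numerical sketch with illustrative parameter values (this is the even-order estimator, not the paper's odd-order method):

```python
import numpy as np

rng = np.random.default_rng(0)
nu, sigma = 2.0, 0.7                  # true signal amplitude and noise level
n = 200_000
# Rician sample: envelope of a deterministic signal plus Gaussian noise
x = np.hypot(nu + sigma * rng.standard_normal(n),
             sigma * rng.standard_normal(n))

m2, m4 = np.mean(x**2), np.mean(x**4)   # 2nd and 4th raw sample moments
nu_hat = (2 * m2**2 - m4) ** 0.25       # from nu^4 = 2*m2^2 - m4
sigma_hat = np.sqrt((m2 - nu_hat**2) / 2)
```

With a sample this large, both estimates land close to the true values, which is what makes the even-order method a natural accuracy baseline for the odd-order technique.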
-
Prediction of moving and unexpected motionless bottlenecks based on three-phase traffic theory
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 319-363
We present a simulation methodology for the prediction of “unexpected” bottlenecks, i.e., bottlenecks that occur suddenly and unexpectedly for drivers on a highway. Such an unexpected bottleneck can be either a moving bottleneck (MB) caused by a slow-moving vehicle or a motionless bottleneck caused by a stopped vehicle (SV). Based on simulations of a stochastic microscopic traffic flow model in the framework of Kerner’s three-phase traffic theory, we show that through the use of a small share of probe vehicles (FCD) randomly distributed in traffic flow the reliable prediction of “unexpected” bottlenecks is possible. We have found that the time dependence of the probability of MB and SV prediction, as well as the accuracy of the estimation of MB and SV location, depends considerably on sequences of phase transitions from free flow (F) to synchronized flow (S) (F→S transition) and back from synchronized flow to free flow (S→F transition), as well as on speed oscillations in synchronized flow at the bottleneck. In the simulation approach, the identification of F→S and S→F transitions at an unexpected bottleneck has been made in accordance with Kerner’s three-phase traffic theory. The presented simulation methodology allows us both to predict the unexpected bottleneck that suddenly occurs on a highway and to distinguish the origin of the unexpected bottleneck, i.e., whether it has occurred due to an MB or an SV.
-
A modified model of the effect of stress concentration near a broken fiber on the tensile strength of high-strength composites (MLLS-6)
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 559-573
The article proposes a model for assessing the potential strength of a composite material based on modern fibers with brittle fracture.
Materials consisting of parallel cylindrical fibers that are quasi-statically stretched in one direction are simulated. It is assumed that the sample contains no fewer than 100 fibers, which corresponds to practically all significant cases. It is known that the fibers in a sample have a distribution of ultimate deformation and are not destroyed at the same moment. Usually the distribution of their properties is described by the Weibull–Gnedenko statistical distribution. To simulate the strength of the composite, a model of fiber break accumulation is used. It is assumed that the fibers, united by the polymer matrix, are fragmented down to twice the ineffective length — the distance over which the stress rises from the end of a broken fiber back to its mid-fiber value. However, this model greatly overestimates the strength of composites with brittle fibers. For example, carbon and glass fibers are destroyed in this way.
In some cases, earlier attempts were made to take into account the stress concentration near the broken fiber (the Hedgepeth model, the Ermolenko model, shear-lag analysis), but such models either required a lot of initial data or did not agree with experiment. In addition, such models idealize the packing of fibers in the composite as a regular hexagonal packing.
The proposed model combines the shear-lag approach to the stress distribution near a destroyed fiber and the statistical approach to fiber strength based on the Weibull–Gnedenko distribution, while introducing a number of assumptions that simplify the calculation without loss of accuracy.
It is assumed that the stress concentration on an adjacent fiber increases the probability of its destruction in accordance with the Weibull distribution, and the number of such fibers with an increased probability of destruction is directly related to the number already destroyed. All initial data can be obtained from simple experiments. It is shown that accounting for load redistribution only to the nearest fibers gives an accurate forecast.
This allows a complete calculation of the strength of the composite. The experimental data we obtained on carbon fibers, glass fibers and model composites based on them (CFRP, GFRP) confirm some of the conclusions of the model.
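For reference, the classical equal-load-sharing (Daniels) bundle, which ignores the local stress concentration that the model above accounts for, can be simulated directly from Weibull–Gnedenko fiber strengths. A minimal sketch with assumed shape and scale parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
shape, scale = 5.0, 1.0               # assumed Weibull-Gnedenko parameters
n_fibers, n_trials = 200, 500         # at least 100 fibers, as assumed above

strengths = scale * rng.weibull(shape, size=(n_trials, n_fibers))
s = np.sort(strengths, axis=1)        # failure order: weakest fiber first
# equal load sharing: after the k weakest fibers fail, the remaining n-k
# fibers share the load; bundle strength is the largest survivable stress
surviving = n_fibers - np.arange(n_fibers)
bundle_strength = (s * surviving / n_fibers).max(axis=1)

mean_fiber = strengths.mean()         # mean strength of a single fiber
mean_bundle = bundle_strength.mean()  # mean bundle strength per fiber
```

The simulation reproduces the qualitative point of the abstract: the bundle is noticeably weaker than the average fiber, and refining the load-redistribution rule is what separates the competing models.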
-
Determining the characteristics of a random process by comparing them with values based on models of distribution laws
Computer Research and Modeling, 2025, v. 17, no. 6, pp. 1105-1118
The effectiveness of communication and data transmission systems (CDTS), which are an integral part of modern systems in almost any field of science and technology, largely depends on the stability of the frequency of the generated signals. The signals generated in a CDTS can be considered as processes whose frequency changes under a combination of external influences. A change in signal frequency leads to a decrease in the signal-to-noise ratio (SNR) and, consequently, a deterioration of system characteristics such as the bit error probability and throughput. It is most convenient to describe such changes in signal frequency as random processes, an apparatus widely used in the construction of mathematical models describing the functioning of systems and devices in various fields of science and technology. Moreover, in many cases, the characteristics of a random process, such as its distribution law, mathematical expectation, and variance, may be unknown or known with errors that do not allow obtaining estimates of the signal parameters of acceptable accuracy. The article proposes an algorithm for determining the characteristics of a random process (the signal frequency) from a set of samples of its frequency, allowing one to determine the sample mean, the sample variance and the distribution law of frequency deviations in the general population. The basis of this algorithm is the comparison of the values of the observed random process measured over a certain time interval with a set of the same number of random values generated from model distribution laws. Distribution laws based on mathematical models of these systems and devices, or corresponding to similar systems and devices, can be used as model distribution laws.
When generating the set of random values for an accepted model distribution law, the sample mean and variance obtained from the measurements of the observed random process are used as its mathematical expectation and variance. The distinctive feature of the algorithm is the comparison of the measured values of the observed random process, ordered in ascending or descending order, with the generated sets of values corresponding to the accepted model distribution laws. The results of mathematical modeling illustrating the application of this algorithm are presented.
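The ordered-value comparison can be sketched as follows: the sorted sample is matched against sorted synthetic sets drawn from each candidate model law (moment-matched to the sample), and the law with the smallest discrepancy is selected. A minimal illustration assuming normal and uniform candidates, which are not necessarily the paper's specific model laws:

```python
import numpy as np

def closest_law(sample, n_rep=200, seed=0):
    """Pick the model distribution whose ordered values best match the sample."""
    rng = np.random.default_rng(seed)
    mu, var = sample.mean(), sample.var(ddof=1)
    s = np.sort(sample)
    n = len(sample)
    models = {
        # candidate laws, moment-matched to the sample mean and variance
        "normal": lambda: rng.normal(mu, np.sqrt(var), n),
        "uniform": lambda: rng.uniform(mu - np.sqrt(3 * var),
                                       mu + np.sqrt(3 * var), n),
    }
    # mean absolute discrepancy between ordered measured and generated values,
    # averaged over n_rep generated sets
    score = {name: np.mean([np.abs(np.sort(draw()) - s).mean()
                            for _ in range(n_rep)])
             for name, draw in models.items()}
    return min(score, key=score.get)
```

For a sample actually drawn from one of the candidate laws, the matching law yields the smaller ordered-value discrepancy, so the correct distribution is identified.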
-
Machine learning interpretation of inter-well radiowave survey data
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 675-684
Traditional geological prospecting methods are becoming ineffective: the exploration depth of kimberlite bodies and ore deposits has increased significantly. The only direct exploration method is to drill a system of wells to depths that provide access to the enclosing rocks. Due to the high cost of drilling, the role of inter-well survey methods has increased: they allow increasing the mean well spacing without significantly raising the probability of missing a kimberlite or ore body. The method of inter-well radio wave survey is effective for finding objects of high conductivity contrast. The physics of the method is based on the dependence of electromagnetic wave propagation on the conductivity of the propagation medium. The source and the receiver of electromagnetic radiation are electric dipoles placed in adjacent wells. The distance between the source and the receiver is known, so the absorption coefficient of the medium can be estimated from the rate of decrease of the radio wave amplitude. Rocks of low electrical resistance correspond to high absorption of radio waves. The inter-well measurement data thus allow estimating the effective electrical resistance (or conductivity) of the rock. Typically, the source and receiver are immersed in adjacent wells synchronously. The electric field amplitude measured at the receiver site allows estimating the average value of the attenuation coefficient along the line connecting the source and the receiver. The measurements are taken during stops, approximately every 5 m. The distance between stops is much less than the distance between adjacent wells, which leads to a significant spatial anisotropy in the distribution of the measured data. The drill grid covers a large area, and our aim is to build a three-dimensional model of the distribution of the electrical properties of the inter-well space over the whole area.
The anisotropy of the spatial distribution makes it hard to use the standard geostatistical approach. To build a three-dimensional model of the attenuation coefficient, we used one of the methods of machine learning, the method of nearest neighbors. In this method, the value of the absorption coefficient at a given point is calculated from the $k$ nearest measurements; the number $k$ has to be determined from additional considerations. The effect of the spatial distribution anisotropy can be reduced by changing the spatial scale in the horizontal direction. The scale factor $\lambda$ is yet another external parameter of the problem. To select the values of the parameters $k$ and $\lambda$ we used the coefficient of determination. To demonstrate the construction of a three-dimensional image of the absorption coefficient, we apply the procedure to inter-well radio wave survey data obtained at one of the sites in Yakutia.
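The anisotropy-corrected nearest-neighbour estimate can be sketched as follows; the coordinates, values, and the choice k = 8 are purely illustrative (in the paper, k and λ are selected by the coefficient of determination):

```python
import numpy as np

def knn_attenuation(train_xyz, train_vals, query_xyz, k=8, lam=1.0):
    """k-nearest-neighbour estimate of the attenuation coefficient.

    lam rescales the horizontal coordinates (x, y) to compensate for the
    anisotropy of the measurements (dense along wells, sparse between them)."""
    scale = np.array([lam, lam, 1.0])
    dist = np.linalg.norm((train_xyz - query_xyz) * scale, axis=1)
    nearest = np.argsort(dist)[:k]
    return train_vals[nearest].mean()

# synthetic example: attenuation grows linearly with depth z
g = np.linspace(5.0, 95.0, 10)
X, Y, Z = np.meshgrid(g, g, g)
pts = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
vals = 0.1 * pts[:, 2]
estimate = knn_attenuation(pts, vals, np.array([50.0, 50.0, 50.0]))
```

On this symmetric synthetic grid the eight nearest measurements surround the query point, so the estimate reproduces the local field value exactly; on real survey data the averaging instead smooths the measurement noise.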
-
On the kinetics of entropy of a system with discrete microscopic states
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1207-1236
An isolated system which possesses a discrete set of microscopic states is considered. The system performs spontaneous random transitions between the microstates. Kinetic equations for the probabilities of the system staying in various microstates are formulated. A general dimensionless expression for the entropy of such a system, which depends on the probability distribution, is considered. Two problems are stated: 1) to study the effect of possible unequal probabilities of different microstates, in particular when the system is in internal equilibrium, on the system entropy value, and 2) to study the kinetics of the microstate probability distribution and the entropy evolution of the system in nonequilibrium states. The kinetics of the transitions between the microstates is assumed to be first-order. Two variants of the effects of possible nonequiprobability of the microstates are considered: i) the microstates form two subgroups whose probabilities are similar within each subgroup but differ between the subgroups, and ii) the microstate probabilities vary arbitrarily around the point at which they are all equal. It is found that, for a fixed total number of microstates, the deviations of entropy from the value corresponding to the equiprobable microstate distribution are extremely small. The latter is a rigorous substantiation of the known hypothesis about the equiprobability of microstates at thermodynamic equilibrium. On the other hand, based on several characteristic examples, it is shown that the structure of random transitions between the microstates exerts a considerable effect on the rate and mode of the establishment of the system's internal equilibrium, on the entropy time dependence and on the expression for the entropy production rate.
Under certain schemes of these transitions, fast and slow components may appear in the transients, and the transients may take the form of damped oscillations. The condition of universality and stability of the equilibrium microstate distribution is that for any pair of microstates a sequence of transitions should exist that provides the passage from one microstate to the other; consequently, there must be no microstate traps.
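The first-order kinetics described above amounts to a linear master equation dp/dt = Kp. With symmetric transition rates the equilibrium distribution is equiprobable and the entropy S = −Σᵢ pᵢ ln pᵢ relaxes to ln N. A minimal sketch for an assumed nearest-neighbour (chain) transition scheme, one of the simplest structures whose connectivity guarantees passage between any pair of microstates:

```python
import numpy as np

N = 4
# nearest-neighbour transition scheme: microstates form a chain 0-1-2-3
# with unit forward and backward rates (an assumed illustrative scheme)
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

K = W - np.diag(W.sum(axis=0))   # generator of the master equation dp/dt = K p

p = np.zeros(N)
p[0] = 1.0                       # start in a single microstate, so S = 0
dt, steps = 1e-3, 20_000
S = np.empty(steps)
for t in range(steps):
    p = p + dt * (K @ p)         # explicit Euler step, probability-conserving
    q = p[p > 0.0]
    S[t] = -np.sum(q * np.log(q))
```

The entropy grows from zero toward ln 4, and changing the chain to another connected transition scheme changes the relaxation rate but not the equiprobable end state, in line with the conclusions above.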
-
Analysis of additive and parametric noise effects on the Morris–Lecar neuron model
Computer Research and Modeling, 2017, v. 9, no. 3, pp. 449-468
This paper is devoted to the analysis of the effect of additive and parametric noise on the processes occurring in a nerve cell. This study is carried out on the example of the well-known Morris–Lecar model described by a two-dimensional system of ordinary differential equations. One of the main properties of a neuron is excitability, i.e., the ability to respond to external stimuli with an abrupt change of the electric potential on the cell membrane. This article considers a set of parameters for which the model exhibits class 2 excitability. The dynamics of the system is studied under variation of the external current parameter. We consider two parametric zones: the monostability zone, where a stable equilibrium is the only attractor of the deterministic system, and the bistability zone, characterized by the coexistence of a stable equilibrium and a limit cycle. We show that in both cases random disturbances result in the phenomenon of stochastic generation of mixed-mode oscillations (i.e., alternating oscillations of small and large amplitudes). In the monostability zone this phenomenon is associated with a high excitability of the system, while in the bistability zone it occurs due to noise-induced transitions between attractors. This phenomenon is confirmed by changes of the probability density functions of random trajectories, power spectral densities and interspike interval statistics. The actions of additive and parametric noise are compared. We show that under parametric noise, the stochastic generation of mixed-mode oscillations is observed at lower intensities than under additive noise. For the quantitative analysis of these stochastic phenomena we propose and apply an approach based on the stochastic sensitivity function technique and the method of confidence domains.
In the case of a stable equilibrium, this confidence domain is an ellipse. For the stable limit cycle, this domain is a confidence band. The study of the mutual location of confidence bands and the boundary separating the basins of attraction for different noise intensities allows us to predict the emergence of noise-induced transitions. The effectiveness of this analytical approach is confirmed by the good agreement of theoretical estimations with results of direct numerical simulations.
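For reference, the Morris–Lecar system with additive noise in the voltage equation can be integrated by the Euler–Maruyama scheme. A minimal sketch using a standard class 2 (Hopf-case) parameter set from the literature, with an external current above the oscillation threshold; these values and the noise intensity are assumptions for illustration, not necessarily those of the paper:

```python
import numpy as np

# standard Morris-Lecar parameters for class 2 excitability (Hopf case),
# assumed here for illustration; I is above the oscillation threshold
C, I = 20.0, 100.0
gCa, VCa, gK, VK, gL, VL = 4.4, 120.0, 8.0, -84.0, 2.0, -60.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
sigma = 0.5                           # additive noise intensity (assumed)

rng = np.random.default_rng(0)
dt, steps = 0.05, 40_000              # Euler-Maruyama, 2000 ms of model time
V, w = -40.0, 0.0
Vs = np.empty(steps)
for t in range(steps):
    m_inf = 0.5 * (1.0 + np.tanh((V - V1) / V2))     # fast Ca activation
    w_inf = 0.5 * (1.0 + np.tanh((V - V3) / V4))     # K activation steady state
    tau_w = 1.0 / np.cosh((V - V3) / (2.0 * V4))     # K activation time scale
    dV = (I - gL * (V - VL) - gCa * m_inf * (V - VCa)
          - gK * w * (V - VK)) / C
    V = V + dt * dV + sigma * np.sqrt(dt) * rng.standard_normal()
    w = w + dt * phi * (w_inf - w) / tau_w
    Vs[t] = V
```

In this regime the trajectory settles onto noisy large-amplitude spiking; lowering I into the mono- or bistability zones and varying sigma reproduces the settings in which the mixed-mode oscillations described above are observed.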
-
System modeling, risks evaluation and optimization of a distributed computer system
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1349-1359
The article deals with the reliability of operation of a distributed system. The system core is an open integration platform that provides interaction of varied software for modeling gas transportation. Some of the components provide access through thin clients on the cloud technology “software as a service”. Mathematical models of operation, transmission and computing are to ensure the operation of an automated dispatching system for oil and gas transportation. The paper presents a system solution based on the theory of Markov random processes and considers the stable operation stage. The stationary operation mode of the Markov chain with continuous time and discrete states is described by a system of Chapman–Kolmogorov equations with respect to the average numbers (mathematical expectations) of the objects in certain states. The objects of research are both system elements that are present in large numbers – thin clients and computing modules – and individual ones – a server and a network manager (message broker). Together, they form interacting Markov random processes. The interaction is determined by the fact that the transition probabilities in one group of elements depend on the average numbers of elements of other groups.
The authors propose a multi-criteria dispersion model of risk assessment for such systems (both in the broad and in the narrow sense, in accordance with the IEC standard). The risk is the standard deviation of the estimated object parameter from its average value. The dispersion risk model makes it possible to define optimality criteria and the risks of the functioning of the whole system. In particular, for a thin client, the following are calculated: the risk of lost profit, the total risk of losses due to non-productive element states, and the total risk of losses over all system states.
Finally, the paper proposes compromise schemes for solving the multi-criteria problem of choosing the optimal operation strategy based on the selected set of compromise criteria.
-
Image classification based on deep learning with automatic relevance determination and structured Bayesian pruning
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 927-938
Deep learning’s power stems from complex architectures; however, these can lead to overfitting, where models memorize training data and fail to generalize to unseen examples. This paper proposes a novel probabilistic approach to mitigate this issue. We introduce two key elements: Truncated Log-Uniform Prior and Truncated Log-Normal Variational Approximation, and Automatic Relevance Determination (ARD) with Bayesian Deep Neural Networks (BDNNs). Within the probabilistic framework, we employ a specially designed truncated log-uniform prior for noise. This prior acts as a regularizer, guiding the learning process towards simpler solutions and reducing overfitting. Additionally, a truncated log-normal variational approximation is used for efficient handling of the complex probability distributions inherent in deep learning models. ARD automatically identifies and removes irrelevant features or weights within a model. By integrating ARD with BDNNs, where weights have a probability distribution, we achieve a variational bound similar to the popular variational dropout technique. Dropout randomly drops neurons during training, encouraging the model not to rely heavily on any single feature. Our approach with ARD achieves similar benefits without the randomness of dropout, potentially leading to more stable training.
To evaluate our approach, we have tested the model on two datasets: the Canadian Institute For Advanced Research (CIFAR-10) dataset for image classification and a dataset of macroscopic images of wood, compiled from several macroscopic wood image datasets. Our method is applied to established architectures like the Visual Geometry Group (VGG) network and the Residual Network (ResNet). The results demonstrate significant improvements: the model reduced overfitting while maintaining, or even improving, the accuracy of the network’s predictions on classification tasks. This validates the effectiveness of our approach in enhancing the performance and generalization capabilities of deep learning models.
-
Impact of spatial resolution on mobile robot path optimality in two-dimensional lattice models
Computer Research and Modeling, 2025, v. 17, no. 6, pp. 1131-1148
This paper examines the impact of the spatial resolution of a discretized (lattice) representation of the environment on the efficiency and correctness of optimal pathfinding in complex environments. Scenarios are considered that may include bottlenecks, non-uniform obstacle distributions, and areas of increased safety requirements in the immediate vicinity of obstacles. Despite the widespread use of lattice representations of the environment in robotics, due to their compatibility with sensor data and support for classical trajectory planning algorithms, the resolution of these lattices has a significant impact on both goal reachability and path optimality. An algorithm is proposed that combines environmental connectivity analysis, trajectory optimization, and geometric safety refinement. In the first stage, the Leath algorithm is used to estimate the reachability of the target point by identifying the connected component containing the starting position. Upon confirmation of the target point’s reachability, the A* algorithm is applied to the nodes of this component in the second stage to construct a path that simultaneously minimizes both the path length and the risk of collision. In the third stage, a refined estimate of the distance to obstacles is computed for nodes located in safety zones using a combination of the Gilbert–Johnson–Keerthi (GJK) and expanding polytope (EPA) algorithms. Experimental analysis revealed a nonlinear relationship between the lattice parameters and the probability that an optimal path exists and is effective. Specifically, reducing the spatial resolution of the lattice increases the likelihood of connectivity loss and target unreachability, while increasing the spatial resolution increases computational complexity without a proportional improvement in path quality.
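The first two stages (connectivity check, then A* on the connected component) can be sketched on a 2D occupancy grid. A minimal illustration with a hypothetical map: a plain flood fill stands in for the Leath-style component identification, and the cost here is pure path length without the collision-risk term:

```python
import heapq

def find_path(grid, start, goal):
    """Flood-fill reachability check, then A* (4-connected, unit step cost)."""
    rows, cols = len(grid), len(grid[0])
    free = lambda r, c: 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0
    nbrs = lambda r, c: [(r + dr, c + dc)
                         for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if free(r + dr, c + dc)]
    # stage 1: connected component containing the start cell
    comp, stack = {start}, [start]
    while stack:
        for n in nbrs(*stack.pop()):
            if n not in comp:
                comp.add(n)
                stack.append(n)
    if goal not in comp:
        return None                   # goal unreachable at this resolution
    # stage 2: A* with the Manhattan-distance heuristic
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_, g, parent = [(h(start), start)], {start: 0}, {}
    while open_:
        _, cur = heapq.heappop(open_)
        if cur == goal:
            path = [cur]
            while cur in parent:
                cur = parent[cur]
                path.append(cur)
            return path[::-1]
        for n in nbrs(*cur):
            if g[cur] + 1 < g.get(n, float("inf")):
                g[n] = g[cur] + 1
                parent[n] = cur
                heapq.heappush(open_, (g[n] + h(n), n))
    return None
```

Coarsening the grid merges free cells with obstacle cells, which is exactly how the connectivity loss described above arises: the flood-fill stage then reports the goal as unreachable even though a path exists in the continuous environment.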
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
International Interdisciplinary Conference "Mathematics. Computing. Education"




