Review of MRI processing techniques and elaboration of a new two-parametric method of moments
Computer Research and Modeling, 2014, v. 6, no. 2, pp. 231-244. Citations: 10 (RSCI).
The paper reviews the existing methods of signal processing under conditions where the Rice statistical model applies. The principal directions of development, the existing limitations, and the possibilities for improving methods of noise suppression and filtration of the analyzed signals are considered, using magnetic resonance imaging as an example. A new approach to the joint calculation of both parameters of a Rician signal has been developed, based on the method of moments in two variants of its implementation. Computer simulation and a comparative analysis of the obtained numerical results have been conducted.
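The classical even-order variant of the method of moments that such reviews build on admits a closed form: for a Rician variable with signal amplitude ν and noise parameter σ, the second and fourth raw moments are μ₂ = ν² + 2σ² and μ₄ = ν⁴ + 8σ²ν² + 8σ⁴, hence ν⁴ = 2μ₂² − μ₄. A minimal sketch of this estimator (not the paper's implementation; the function name is ours):

```python
import numpy as np

def rice_params_from_even_moments(samples):
    """Joint estimate of the Rician signal amplitude nu and noise sigma
    from the sample 2nd and 4th raw moments:
        mu2 = nu^2 + 2*sigma^2,  mu4 = nu^4 + 8*sigma^2*nu^2 + 8*sigma^4,
    hence nu^4 = 2*mu2^2 - mu4 and sigma^2 = (mu2 - nu^2) / 2."""
    x = np.asarray(samples, dtype=float)
    m2 = np.mean(x**2)
    m4 = np.mean(x**4)
    nu = max(2.0 * m2**2 - m4, 0.0) ** 0.25
    sigma = np.sqrt(max(m2 - nu**2, 0.0) / 2.0)
    return nu, sigma

# Usage: a Rician sample is the envelope of a noisy complex signal.
rng = np.random.default_rng(0)
nu_true, sigma_true = 3.0, 1.0
x = np.hypot(nu_true + sigma_true * rng.standard_normal(200_000),
             sigma_true * rng.standard_normal(200_000))
nu_hat, sigma_hat = rice_params_from_even_moments(x)
```

For large samples both estimates converge to the true parameters; the paper's contribution lies in variants of this joint two-parameter scheme.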
-
Hierarchical method for mathematical modeling of stochastic thermal processes in complex electronic systems
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 613-630. Views (last year): 3.
A hierarchical method of mathematical and computer modeling of interval-stochastic thermal processes in complex electronic systems for various purposes is developed. The concept of hierarchical structuring reflects both the constructive hierarchy of a complex electronic system and the hierarchy of mathematical models of its heat exchange processes. Thermal processes that account for various physical phenomena in complex electronic systems are described by systems of stochastic, unsteady, nonlinear partial differential equations, so their computer simulation encounters considerable computational difficulties even on supercomputers. The hierarchical method avoids these difficulties. The hierarchical structure of an electronic system design is generally characterized by five levels: level 1, the active elements of the system (microcircuits, electro-radio elements); level 2, the electronic module; level 3, a panel combining a set of electronic modules; level 4, a block of panels; level 5, a stand installed in a stationary or mobile room. The hierarchy of models and of the modeling of stochastic thermal processes is constructed in the reverse order of the design hierarchy, and the interval-stochastic thermal processes themselves are modeled by deriving equations for statistical measures. The method takes into account the principal features of thermal processes: the stochastic nature of thermal, electrical, and design factors in the production, assembly, and installation of electronic systems; the stochastic scatter of operating conditions and the environment; the nonlinear temperature dependences of heat exchange factors; and the unsteady nature of the thermal processes.
The equations obtained for the statistical measures of the stochastic thermal processes form a system of 14 non-stationary nonlinear first-order ordinary differential equations, whose solution is easily implemented on modern computers by existing numerical methods. Results of applying the method to the computer simulation of stochastic thermal processes in electronic systems are considered. The hierarchical method has been applied in practice to the thermal design of real electronic systems and the creation of modern competitive devices.
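The idea of replacing a stochastic model by deterministic equations for its statistical measures can be illustrated on a toy lumped thermal node (our simplification, not the paper's 14-equation system; all parameter values are assumed): for a node with heat capacity C, conductance G to an environment whose temperature has random scatter, the mean, the covariance with the environment, and the variance of the node temperature obey a closed system of first-order ODEs.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy lumped thermal node (illustration only, not the paper's model):
#   C dT/dt = P - G (T - T_env),  T_env random with mean me, variance ve.
# Because the model is linear in T_env, the mean m, the covariance
# c = cov(T, T_env), and the variance v of T satisfy closed ODEs.
C, G, P = 50.0, 2.0, 40.0      # J/K, W/K, W   (assumed values)
me, ve = 20.0, 4.0             # mean (K) and variance (K^2) of T_env

def measures(t, y):
    m, c, v = y
    dm = (P - G * (m - me)) / C        # equation for the mean
    dc = (G / C) * (ve - c)            # equation for the covariance
    dv = 2.0 * (G / C) * (c - v)       # equation for the variance
    return [dm, dc, dv]

# Deterministic initial temperature equal to the mean environment:
sol = solve_ivp(measures, (0.0, 300.0), [me, 0.0, 0.0], rtol=1e-8)
m_end, c_end, v_end = sol.y[:, -1]
# Steady state: mean -> me + P/G, covariance and variance -> ve.
```

The paper's system plays the same role at each level of the design hierarchy, with nonlinear heat exchange factors making the measure equations nonlinear as well.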
-
Analysis of the basic equation of the physical and statistical approach within reliability theory of technical systems
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 721-735.
Verification of the physical-statistical approach within reliability theory was carried out for the simplest cases, confirming its validity. An analytical solution of the one-dimensional basic equation of the physical-statistical approach is presented under the assumption of a stationary degradation rate. From a mathematical point of view, this equation is the well-known continuity equation, in which the role of density is played by the distribution function of products over their phase space of characteristics, and the role of fluid velocity is played by the intensity (rate) of degradation processes. The latter connects the general formalism with the specifics of degradation mechanisms. The cases of coordinate-constant, linear, and quadratic degradation rates are analyzed by the method of characteristics. In the first two cases the results agree with physical intuition. At a constant degradation rate, the shape of the initial distribution is preserved and the distribution itself moves uniformly away from zero. At a linear degradation rate, the distribution either narrows to a sharp peak (in the singular limit) or expands, its maximum shifting to the periphery at an exponentially increasing rate; the form of the distribution is again preserved up to parameters. For an initial normal distribution, the coordinates of the largest value of the distribution maximum during its return motion are obtained analytically.
In the quadratic case, the formal solution exhibits counterintuitive behavior: it is uniquely defined only on part of an infinite half-plane, vanishes together with all its derivatives on the boundary, and is ambiguous when the boundary is crossed. If it is continued into the other region in accordance with the analytical solution, it takes a two-humped form and conserves the amount of substance, but becomes periodic in time, which is devoid of physical meaning. If it is continued by zero, the conservation property is violated. The anomaly of the quadratic case is explained, though not rigorously, by analogy with the motion of a material point whose acceleration is proportional to the square of its velocity. Here we are dealing with a mathematical curiosity. Numerical calculations are given for all cases. Additionally, the entropy of the probability distribution and the reliability function are calculated and their correlation is traced.
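For the linear case the method-of-characteristics solution can be written out explicitly (a standard calculation, consistent with the behavior described above): with degradation rate $v(x) = kx$ the continuity equation and its solution are

```latex
% Continuity equation of the physical-statistical approach,
% stationary linear degradation rate v(x) = kx:
\frac{\partial f}{\partial t} + \frac{\partial}{\partial x}\bigl(kx\,f\bigr) = 0,
\qquad
f(x,t) = e^{-kt}\, f_0\!\left(x\,e^{-kt}\right).
```

For $k > 0$ the initial profile $f_0$ is stretched, its maximum moving outward exponentially while normalization $\int f\,dx$ is conserved; for $k < 0$ the same formula describes the narrowing toward a singular peak.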
-
Estimation of the probability of spontaneous synthesis of computational structures in relation to the implementation of parallel information processing
Computer Research and Modeling, 2021, v. 13, no. 4, pp. 677-696.
We consider a model of the spontaneous formation of a computational structure in the human brain for solving a given class of tasks in the course of performing a series of similar tasks. The model is based on a special definition of a numerical measure of the complexity of the solution algorithm. This measure has an informational property: the complexity of a computational structure consisting of two independent structures equals the sum of the complexities of those structures. The probability of the spontaneous occurrence of a structure then depends exponentially on its complexity. The exponential coefficient requires experimental determination for each type of problem; it may depend on the form in which the source data are presented and on the procedure for issuing the result. This estimation method was applied to the results of a series of experiments that determined the strategy for solving a series of similar problems with a growing number of initial data. These experiments were described in previously published papers. Two main strategies were considered: sequential execution of the computational algorithm, or the use of parallel computing in those tasks where it is effective. The strategies differ in how the calculations are performed. Using an estimate of the complexity of the schemes, the empirical probability of one strategy can be used to calculate the probability of the other. The calculations performed showed a good match between the calculated and empirical probabilities, which confirms the hypothesis of the spontaneous formation of structures that solve the problem during a person's initial training. The paper contains a brief description of the experiments, detailed computational schemes, a rigorous definition of the complexity measure of computational structures, and a derivation of the dependence of the probability of structure formation on its complexity.
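The estimation scheme described above can be sketched numerically under one loud assumption (ours, a reading of the abstract rather than the paper's formulas): a structure of complexity C arises with probability proportional to exp(−λC), and the choice between two strategies is the ratio of those weights. The empirical frequency of one strategy then fixes λ, from which the other strategy's probability follows. All numbers are illustrative, not from the paper.

```python
import math

def fit_lambda(p_seq, c_seq, c_par):
    """Recover the exponential coefficient lambda from the empirical
    probability p_seq of the sequential strategy, assuming
    P(strategy) ~ exp(-lambda * complexity) over the two alternatives,
    so that p_seq / (1 - p_seq) = exp(lambda * (c_par - c_seq))."""
    return math.log(p_seq / (1.0 - p_seq)) / (c_par - c_seq)

def predict_p(lmbda, c_a, c_b):
    """Probability of the structure with complexity c_a arising
    rather than the alternative with complexity c_b."""
    wa, wb = math.exp(-lmbda * c_a), math.exp(-lmbda * c_b)
    return wa / (wa + wb)

# Hypothetical complexities and frequency (illustrative numbers):
lmbda = fit_lambda(p_seq=0.8, c_seq=10.0, c_par=14.0)
p_par = predict_p(lmbda, 12.0, 10.0)   # prediction for another pair of schemes
```

The round trip is exact by construction: feeding the fitted λ back into the original pair reproduces the empirical 0.8.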
-
Contact specificity in protein-DNA complexes
Computer Research and Modeling, 2009, v. 1, no. 3, pp. 281-286.
In this work we investigated contacts between proteins and nucleic acids by Voronoi-Delaunay tessellation. About one third of all contacts (32.3%) are contacts between nucleotides and the positively charged residues Arg and Lys. Ser and Thr together take part in 15% of contacts. Asn forms 6% of contacts, as does Gly. The contribution of each other residue type does not exceed 5%. The statistically significant contacts are Asp-G, Trp-C, Glu-C, Asp-C, and His-T. A general mechanism for the participation of charged residues in specific protein-DNA interactions is suggested.
Keywords: Voronoi–Delaunay tessellation, protein-DNA complexes. Views (last year): 2.
-
Views (last year): 3.
Road network infrastructure is the basis of any urban area. This article compares the structural characteristics (meshedness coefficient, clustering coefficient) of two Moscow road networks: the city center (Old Moscow), formed by self-organization, and the roads near Leninsky Prospekt (postwar Moscow), which resulted from centralized planning. The data for constructing the road networks as graphs were taken from the Internet resource OpenStreetMap, which allows the coordinates of intersections to be identified accurately. Based on the computed characteristics of the Moscow road network areas, cities with road networks structurally similar to the two Moscow areas were found in foreign publications. Using the dual representation of the road networks of the centers of Moscow and St. Petersburg, we studied the informational and cognitive features of navigation in these tourist areas of the two capitals. In constructing the dual graphs of the studied areas, the different road types (one-way or two-way traffic, etc.) were not taken into account, i.e., the dual graphs are undirected. Since road networks in the dual representation are described by a power-law distribution of vertices over the number of edges (scale-free networks), the exponents of these distributions were calculated. It is shown that the information complexity of the dual graph of the center of Moscow exceeds the cognitive threshold of 8.1 bits, while the same measure for the center of St. Petersburg is below this threshold, because the road network of the center of St. Petersburg was created by planning and is therefore easier to navigate. In conclusion, using the methods of statistical mechanics (calculation of partition functions), the Gibbs entropy was calculated for the road networks of several Russian cities. It was found that the entropy decreases as the road network grows.
We discuss the problem of studying the evolution of urban infrastructure networks of different kinds (public transport, supply and communication networks, etc.), which allows a deeper exploration and understanding of the fundamental laws of urbanization.
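The two structural measures named above are easy to compute from a graph's node and edge counts and its degree sequence. A minimal sketch (the grid data are ours, illustrative only; the article works with OpenStreetMap graphs): the meshedness coefficient is the number of independent cycles e − v + 1 relative to its planar maximum 2v − 5, and the information measure compared with the 8.1-bit threshold is the Shannon entropy of the degree distribution.

```python
import math
from collections import Counter

def meshedness(num_nodes, num_edges):
    """Meshedness coefficient of a connected planar graph:
    independent cycles e - v + 1 over the planar maximum 2v - 5."""
    return (num_edges - num_nodes + 1) / (2 * num_nodes - 5)

def degree_entropy_bits(degrees):
    """Shannon entropy (bits) of the degree distribution -- the kind of
    information measure compared with the ~8.1-bit cognitive threshold."""
    counts = Counter(degrees)
    n = len(degrees)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy 3x3 grid of intersections (illustrative, not OpenStreetMap data):
# v = 9 nodes, e = 12 edges, degrees: four corners of 2, four sides of 3,
# one center of 4.
alpha = meshedness(9, 12)                              # (12-9+1)/(18-5)
h = degree_entropy_bits([2, 3, 2, 3, 4, 3, 2, 3, 2])   # entropy in bits
```

For real networks the same functions would be fed the intersection graph (primal) or street graph (dual) extracted from OpenStreetMap.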
-
Origin and growth of the disorder within an ordered state of the spatially extended chemical reaction model
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 595-607. Views (last year): 7.
We review the main points of the mean-field approximation (MFA) as applied to multicomponent stochastic reaction-diffusion systems.
We present the chemical reaction model under study, the Brusselator. We write the kinetic equations of the reaction, supplementing them with terms that describe the diffusion of the intermediate components and the fluctuations of the concentrations of the initial products. The fluctuations are simulated as random Gaussian homogeneous and spatially isotropic fields with zero means and spatial correlation functions of non-trivial structure. The model parameter values correspond to a spatially inhomogeneous ordered state in the deterministic case.
In the MFA we derive a single-site two-dimensional nonlinear self-consistent Fokker–Planck equation in the Stratonovich interpretation for the spatially extended stochastic Brusselator, which describes the dynamics of the probability density of the component concentrations of the system under consideration. We find the noise intensity values corresponding to two types of solutions of the Fokker–Planck equation: a solution with transient bimodality and a solution with multiple alternation of unimodal and bimodal forms of the probability density. We study numerically the dynamics of the probability density and the time behavior of the variances, expectations, and most probable values of the component concentrations at various values of the noise intensity and the bifurcation parameter in the specified region of the problem parameters.
Beginning from a certain value of the external noise intensity, disorder originates inside the ordered phase and exists for a finite time; the higher the noise level, the longer this disorder “embryo” lives. The farther from the bifurcation point, the lower the noise that generates it and the narrower the range of noise intensities at which the system evolves to an ordered, but new, statistically steady state. At a second noise intensity value, intermittency of the ordered and disordered phases sets in, and increasing the noise intensity makes order and disorder alternate ever more frequently.
Thus, the scenario of the noise-induced order–disorder transition in the system under study consists in the intermittency of the ordered and disordered phases.
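The Brusselator kinetics underlying the model are standard: dX/dt = A − (B+1)X + X²Y, dY/dt = BX − X²Y. A zero-dimensional Euler–Maruyama sketch with additive Gaussian noise (our drastic simplification: the paper's model is spatially extended, with diffusion terms and spatially correlated noise fields, and is analyzed through a Fokker–Planck equation rather than direct simulation):

```python
import numpy as np

# Zero-dimensional stochastic Brusselator, Euler-Maruyama integration.
A, B = 1.0, 3.0            # B > 1 + A**2 puts the deterministic system
                           # past its instability (oscillatory regime)
eps = 0.05                 # noise intensity (assumed value)
dt, n_steps = 1e-3, 50_000

rng = np.random.default_rng(42)
x, y = A, B / A            # start at the deterministic fixed point
xs = np.empty(n_steps)
for i in range(n_steps):
    dwx = rng.standard_normal() * np.sqrt(dt)
    dwy = rng.standard_normal() * np.sqrt(dt)
    fx = A - (B + 1.0) * x + x * x * y   # Brusselator kinetics
    fy = B * x - x * x * y
    x += fx * dt + eps * dwx             # Euler-Maruyama step
    y += fy * dt + eps * dwy
    xs[i] = x
```

With these parameters the noise kicks the system off the unstable fixed point onto large oscillations of X; statistics gathered over an ensemble of such trajectories are what the mean-field Fokker–Planck equation describes self-consistently.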
-
Signal and noise parameters’ determination at rician data analysis by method of moments of lower odd orders
Computer Research and Modeling, 2017, v. 9, no. 5, pp. 717-728. Views (last year): 10. Citations: 1 (RSCI).
The paper develops a new mathematical method for the joint determination of the signal and noise parameters of the Rice statistical distribution by the method of moments, based on the analysis of data for the 1st and 3rd raw moments of the Rician random variable. An explicit system of equations has been obtained for the sought signal and noise parameters. In the limiting case of a small signal-to-noise ratio, analytical formulas have been derived that allow the required parameters to be calculated without solving the equations numerically. The technique elaborated in the paper ensures an efficient separation of the informative and noise components of the analyzed data without any a priori restrictions, based solely on processing sampled measurements of the signal. The task is meaningful for Rician data processing, in particular in magnetic resonance imaging systems, in ultrasound visualization systems, in the analysis of optical signals in range-measuring systems, in radiolocation, etc. The investigation shows that solving the two-parameter task by the proposed technique does not increase the demanded volume of computing resources compared with the one-parameter task solved under the approximation that the second parameter is known a priori. The results of computer simulation of the elaborated technique are provided; the numerical calculation of the signal and noise parameters has confirmed its efficiency.
The accuracy of estimating the sought-for parameters by the technique developed in this paper is compared with that of the previously elaborated method of moments based on processing the measured data for the lower even moments of the analyzed signal.
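The 1st and 3rd raw moments of a Rician variable have no elementary closed form, but the resulting two-equation system is easy to solve numerically. A sketch using SciPy's Rice distribution (not the authors' code; the paper derives explicit equations instead of relying on a generic root finder):

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import rice

def solve_from_odd_moments(m1, m3, guess=(1.0, 1.0)):
    """Solve the system mu1(nu, sigma) = m1, mu3(nu, sigma) = m3
    for the Rice parameters numerically. SciPy parameterizes the Rice
    distribution by shape b = nu / sigma with scale = sigma."""
    def equations(p):
        nu, sigma = np.abs(p)            # both parameters are non-negative
        b = nu / sigma
        return [rice.moment(1, b, scale=sigma) - m1,
                rice.moment(3, b, scale=sigma) - m3]
    nu, sigma = np.abs(fsolve(equations, guess))
    return nu, sigma

# Usage: forward-compute the exact moments for known parameters,
# then recover the parameters from those moments.
nu_true, sigma_true = 2.0, 0.5
b_true = nu_true / sigma_true
m1 = rice.moment(1, b_true, scale=sigma_true)
m3 = rice.moment(3, b_true, scale=sigma_true)
nu_hat, sigma_hat = solve_from_odd_moments(m1, m3)
```

In practice m1 and m3 would come from sample averages of the measured envelope, so the recovery is exact only up to sampling noise.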
-
Simulation of rail vehicles ride in Simpack Rail on the curved track
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 249-263. Views (last year): 20.
The paper studies the determination of one of the dynamic quality parameters (PDK) of railway vehicles, the car-body lateral acceleration, using the computer simulation system for railway vehicle dynamics Simpack Rail. This provides a complex simulation environment with variable velocity depending on the train schedule. The rail vehicle model of a typical 1520 mm gauge freight locomotive section used for the simulation has been verified by the chair "Electric multiple unit cars and locomotives" of the Russian University of Transport (RUT (MIIT)). Owing to this homologation, questions of model creation and verification in the preprocessor are excluded from this paper. The paper gives a detailed description of cartographic track modeling in the horizontal alignment, vertical alignment, and superelevation, based on real operating data. The statistical parameters (moments) of the rail-related track excitation and the cartographic track data of the specified section used in this simulation are given as numerical and graphical results of reading the prepared data files. The measurement of the residual car-body lateral acceleration takes into account the component of the gravitational acceleration, as an accelerometer measurement would in the real world. Finally, the desired quality parameter determined by simulation is compared with the same parameter obtained in a test ride. The calculation method in both cases is based on the mean value of the absolute maxima picked from the non-stationary realizations of this parameter. The compared results confirm that this quality factor depends first of all on the velocity and the track geometry. The track used in this simulation conforms closely to the original track data of the test-ride section.
The accepted simplifications in the model of the freight electric locomotive section (body properties related to the center of gravity, small displacements between the bodies), with the geometric and force-law characteristics of the force elements and constraints kept unchanged, allow simulation in Simpack Rail with the necessary validity of the system behavior (reactions).
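The evaluation rule quoted above, the mean of the absolute maxima over the non-stationary realizations, is simple to state in code. A sketch with a synthetic acceleration record (the window length, signal, and all numbers are ours, illustrative only):

```python
import numpy as np

def mean_abs_maximum(signal, window):
    """Dynamic-quality estimate: split the record into consecutive
    realizations of `window` samples, take the absolute maximum of
    each realization, and average those maxima."""
    n = len(signal) // window
    chunks = np.asarray(signal[: n * window]).reshape(n, window)
    return np.abs(chunks).max(axis=1).mean()

# Synthetic car-body lateral acceleration, m/s^2 (illustrative):
rng = np.random.default_rng(1)
t = np.linspace(0.0, 60.0, 6000)                      # 100 Hz, 60 s
accel = (0.3 * np.sin(2 * np.pi * 0.7 * t)            # carbody sway
         + 0.1 * rng.standard_normal(t.size))         # track-induced noise
pdk = mean_abs_maximum(accel, window=600)             # 6-second realizations
```

In the paper the same statistic is computed for both the simulated and the measured acceleration, and the two values are compared.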
-
Estimates of threshold and strength of percolation clusters on square lattices with (1,π)-neighborhood
Computer Research and Modeling, 2014, v. 6, no. 3, pp. 405-414. Views (last year): 4. Citations: 5 (RSCI).
In this paper we consider statistical estimates of the threshold and strength of percolation clusters on square lattices. The percolation threshold p_c and the strength of percolation clusters P_∞ for a square lattice with (1,π)-neighborhood depend not only on the lattice dimension but also on the Minkowski exponent d. To estimate the strength of percolation clusters P_∞, a new method of averaging the relative frequencies of the target subset of lattice sites is proposed. The method is implemented in the SPSL package, released under GNU GPL-3 in the free programming language R.
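The Monte Carlo machinery behind such estimates can be sketched for the simplest case, ordinary site percolation with the von Neumann neighborhood (our simplification; the paper's (1,π)-neighborhood generalizes the adjacency rule, and its SPSL implementation is in R):

```python
import numpy as np
from scipy.ndimage import label

def spanning_probability(p, size=64, trials=200, seed=0):
    """Monte Carlo estimate of the probability that site percolation
    on a size x size square lattice (4-connected von Neumann
    neighborhood) contains a cluster spanning top row to bottom row."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        occupied = rng.random((size, size)) < p
        labels, _ = label(occupied)          # 4-connected cluster labels
        # a cluster spans if some label occurs in both boundary rows
        top = set(labels[0][labels[0] > 0])
        bottom = set(labels[-1][labels[-1] > 0])
        hits += bool(top & bottom)
    return hits / trials

low = spanning_probability(0.45)   # below p_c ~ 0.5927 for this lattice
high = spanning_probability(0.70)  # above p_c
```

Sweeping p and locating the jump of this spanning probability gives a finite-size estimate of p_c; the strength P_∞ is estimated analogously from the fraction of sites in the spanning cluster, which is where the paper's averaging of relative frequencies enters.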
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
International Interdisciplinary Conference "Mathematics. Computing. Education"