- Numerical studies of the parameters of the perturbed region formed in the lower ionosphere under the action of a directed flux of radio waves from a terrestrial source
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 679-708

The paper presents a physico-mathematical model of the perturbed region formed in the lower D-layer of the ionosphere under the action of a directed radio emission flux from a terrestrial facility of the megahertz frequency range, obtained as a result of comprehensive theoretical studies. The model is based on the consideration of a wide range of kinetic processes, taking their nonequilibrium nature into account and using the two-temperature approximation to describe the transformation of the radio beam energy absorbed by electrons. The initial data on radio emission correspond to the levels achieved by the most powerful radio-heating facilities; their basic characteristics and operating principles, as well as the features of the altitude distribution of the absorbed electromagnetic energy of the radio beam, are briefly described. The paper demonstrates the decisive role of the ionospheric D-layer in the absorption of the radio beam energy. On the basis of theoretical analysis, analytical expressions are obtained for the contribution of various inelastic processes to the distribution of the absorbed energy, which makes it possible to describe the contribution of each of the processes considered correctly. The model includes more than 60 components, whose concentration changes are described by about 160 reactions. All the reactions are divided into five groups according to their physical content: an ionization-chemical block, a block of excitation of metastable electronic states, a cluster block, a block of excitation of vibrational states, and a block of impurities. The blocks are interrelated and can be calculated both jointly and separately. The paper shows that the behavior of the parameters of the perturbed region in daytime and nighttime conditions differs significantly at the same radio flux density: under daytime conditions, the maxima of the electron concentration and temperature lie at an altitude of ~45–55 km; under nighttime conditions, at ~80 km, where the temperature of heavy particles increases rapidly, giving rise to a gas-dynamic flow. Therefore, a special numerical algorithm is developed to solve two basic problems: kinetic and gas-dynamic. Based on the altitude and temporal behavior of the concentrations and temperatures, the algorithm makes it possible to determine the ionization and emission of the ionosphere in the visible and infrared spectral ranges, and thereby to evaluate the influence of the perturbed region on radio-engineering and optoelectronic devices used in space technology.
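The abstract does not reproduce the kinetic scheme itself; as a purely illustrative sketch of the simplest kind of balance governing the D-layer electron density, the snippet below integrates a toy ionization–recombination equation dn_e/dt = q − α_eff·n_e², with an assumed production rate q and effective recombination coefficient α_eff (placeholder values, not the paper's data).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy ionization-recombination balance for the D-layer electron density:
#   dn_e/dt = q - alpha_eff * n_e**2
# q         - ion-pair production rate by the radio beam [cm^-3 s^-1] (assumed)
# alpha_eff - effective recombination coefficient [cm^3 s^-1]         (assumed)
q, alpha_eff = 1.0e4, 3.0e-7
n_e0 = 1.0e3                      # initial electron density [cm^-3]

def rhs(t, n_e):
    return q - alpha_eff * n_e**2

sol = solve_ivp(rhs, (0.0, 100.0), [n_e0], method="LSODA", rtol=1e-8)
print(f"steady state ~ {np.sqrt(q / alpha_eff):.3e} cm^-3, "
      f"final value {sol.y[0, -1]:.3e} cm^-3")
```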
- System modeling, risk evaluation and optimization of a distributed computer system
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1349-1359

The article deals with the operational reliability of a distributed system whose core is an open integration platform providing interaction of various software packages for modeling gas transportation. Some of them are accessed through thin clients under the cloud "software as a service" model. Mathematical models of operation, transmission and computing are intended to ensure the operation of an automated dispatching system for oil and gas transportation. The paper presents a system solution based on the theory of Markov random processes and considers the stable operation stage. The stationary operation mode of the Markov chain with continuous time and discrete states is described by a system of Chapman–Kolmogorov equations with respect to the average numbers (mathematical expectations) of objects in particular states. The objects of research are both system elements present in large numbers (thin clients and computing modules) and individual ones (a server and a network manager, i.e. a message broker). Together they form interacting Markov random processes, the interaction being determined by the fact that the transition probabilities in one group of elements depend on the average numbers of elements in the other groups.
The authors propose a multi-criteria dispersion model of risk assessment for such systems (both in the broad and in the narrow sense, in accordance with the IEC standard). The risk is defined as the standard deviation of the estimated object parameter from its average value. The dispersion risk model makes it possible to define optimality criteria and the risks of the functioning of the whole system. In particular, for a thin client the following are calculated: the lost-profit risk, the total risk of losses due to non-productive element states, and the total risk of losses over all system states.
Finally, the paper proposes compromise schemes for solving the multi-criteria problem of choosing the optimal operation strategy based on the selected set of compromise criteria.
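The abstract does not list the transition intensities of the chain; the sketch below, under assumed rates, shows the generic computation behind such a model: the stationary distribution of a small continuous-time Markov chain is found from its generator matrix (πQ = 0, Σπ = 1), and a dispersion-type risk is taken as the standard deviation of an illustrative per-state loss.

```python
import numpy as np

# Illustrative generator matrix Q of a 3-state continuous-time Markov chain
# (states: 0 - productive, 1 - waiting, 2 - failed); rates are assumed.
Q = np.array([[-0.5,  0.4,  0.1],
              [ 0.7, -0.9,  0.2],
              [ 1.0,  0.0, -1.0]])

# Stationary distribution: pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Dispersion-type risk: standard deviation of an assumed per-state loss.
loss = np.array([0.0, 1.0, 5.0])          # loss in each state (illustrative)
mean_loss = pi @ loss
risk = np.sqrt(pi @ (loss - mean_loss) ** 2)
print(f"pi = {pi}, mean loss = {mean_loss:.3f}, risk (std) = {risk:.3f}")
```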
- A study of the influence of two geometric parameters on the accuracy of the hydrostatic problem solution by the SPH method
Computer Research and Modeling, 2021, v. 13, no. 5, pp. 979-992

Two significant geometric parameters are proposed that affect the interpolation of physical quantities in the smoothed particle hydrodynamics (SPH) method: the smoothing coefficient, which relates the particle size to the smoothing radius, and the volume coefficient, which correctly determines the particle mass for a given particle distribution in the medium.
The paper proposes a technique for assessing the influence of these parameters on the accuracy of SPH interpolations when solving the hydrostatic problem. Analytical functions of the relative error of the density and the pressure gradient in the medium are introduced for the accuracy estimate; these functions depend on the smoothing coefficient and the volume coefficient. Choosing a specific interpolation form in the SPH method allows the differential form of the relative error functions to be converted into an algebraic polynomial form. The roots of this polynomial give the smoothing coefficient values that provide the minimum interpolation error for a given volume coefficient.
In this work, the relative error functions of the density and the pressure gradient are derived and analyzed for a set of popular kernels with different smoothing radii. No common value of the smoothing coefficient exists that provides the minimum error of both SPH interpolations for all the considered kernels. Kernels with different smoothing radii are identified that provide the smallest errors of the SPH interpolations when solving the hydrostatic problem. Likewise, certain kernels with different smoothing radii are identified that do not allow correct interpolation when solving the hydrostatic problem by the SPH method.
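The paper's analytical error functions are not given in the abstract; the sketch below only illustrates the underlying SPH density summation ρ_i = Σ_j m_j W(|r_i − r_j|, h) on a uniform one-dimensional particle distribution with a standard cubic-spline kernel, showing how the relative density error varies with the smoothing coefficient κ = h/Δx (the kernel choice, dimensionality and parameter values are assumptions, not those of the paper).

```python
import numpy as np

def cubic_spline_1d(r, h):
    """Standard 1D cubic-spline SPH kernel with support radius 2h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# Uniform 1D particle distribution with unit density (assumed setup).
dx = 0.01                       # particle spacing
x = np.arange(-1.0, 1.0, dx)
rho_true = 1.0
m = rho_true * dx               # particle mass from the volume (here: length) element

# Relative error of the SPH density summation at the central particle
# as a function of the smoothing coefficient kappa = h / dx.
x0 = 0.0
for kappa in (0.8, 1.0, 1.2, 1.5, 2.0):
    h = kappa * dx
    rho_sph = np.sum(m * cubic_spline_1d(x - x0, h))
    print(f"kappa = {kappa:.1f}: relative density error = "
          f"{abs(rho_sph - rho_true) / rho_true:.2e}")
```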
- Technique for analyzing noise-induced phenomena in two-component stochastic systems of reaction–diffusion type with power nonlinearity
Computer Research and Modeling, 2025, v. 17, no. 2, pp. 277-291

The paper constructs and studies a generalized model describing two-component systems of reaction–diffusion type with power nonlinearity, considering the influence of external noise. A methodology has been developed for analyzing the generalized model, which includes linear stability analysis, nonlinear stability analysis, and numerical simulation of the system's evolution. The linear analysis technique uses standard approaches, in which the characteristic equation is obtained from a linearization matrix. The nonlinear stability analysis is carried out up to third-order moments inclusive. For this, the functions describing the dynamics of the components are expanded in Taylor series up to third-order terms, and the averaging procedure is then carried out using the Novikov theorem. The resulting equations form an infinite, hierarchically subordinate structure, which must be truncated at some point. To achieve this, contributions from terms higher than the third order are neglected both in the equations themselves and during the construction of the moment equations. The resulting equations form a set of linear equations, from which the stability matrix is constructed. This matrix has a rather complex structure, so the corresponding stability problem can be treated only numerically. For the numerical study of the system's evolution, the alternating direction (variable directions) method was chosen. Because of the stochastic component in the analyzed system, the method was modified so that random fields with a specified distribution and correlation function, responsible for the noise contribution to the overall nonlinearity, are generated across entire layers. The developed methodology was tested on the reaction–diffusion model proposed by Barrio et al., whose results showed the similarity of the obtained structures to fish pigmentation patterns. This paper focuses on the analysis of the system behavior in the neighborhood of a non-zero stationary point. The dependence of the real part of the eigenvalues on the wavenumber has been examined. In the linear analysis, a range of wavenumber values is identified in which Turing instability occurs. Nonlinear analysis and numerical simulation of the system's evolution are conducted for model parameters that, in contrast, lie outside the Turing instability region. The nonlinear analysis reveals additive-noise intensities for which, despite the absence of conditions for the emergence of diffusion instability, the system transitions to an unstable state. The results of the numerical simulation of the evolution of the tested model demonstrate the formation of Turing-type spatial structures under the influence of additive noise.
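The abstract mentions examining the dependence of the real part of the eigenvalues on the wavenumber; the snippet below is a minimal sketch of that linear (deterministic) step for a generic two-component reaction–diffusion system, where the eigenvalues of J − k²D are scanned over k. The Jacobian entries and diffusion coefficients are placeholder values, not the parameters of the Barrio et al. model.

```python
import numpy as np

# Linear Turing analysis for a generic two-component reaction-diffusion system
#   u_t = D_u * u_xx + f(u, v),   v_t = D_v * v_xx + g(u, v).
# J is the Jacobian of (f, g) at the stationary point; values are illustrative.
J = np.array([[ 0.9, -1.0],
              [ 1.1, -1.2]])
D = np.diag([0.05, 1.0])          # assumed diffusion coefficients (D_u << D_v)

k = np.linspace(0.0, 5.0, 501)    # wavenumbers
re_lambda = np.array([
    np.max(np.linalg.eigvals(J - kk**2 * D).real) for kk in k
])

unstable = k[re_lambda > 0.0]
if unstable.size:
    print(f"Turing-unstable band: k in [{unstable.min():.2f}, {unstable.max():.2f}]")
else:
    print("No Turing instability for these parameters.")
```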
- Modelling diameter measurement errors of a wide-aperture laser beam with a flat profile
Computer Research and Modeling, 2015, v. 7, no. 1, pp. 113-124

The work is devoted to modeling the instrumental errors of laser beam diameter measurement using a method based on a Lambertian transmissive screen. A super-Lorentzian distribution was used as the beam model. Computational experiments were performed to determine the effect of each parameter on the measurement error, and their results were approximated by analytic functions. Error dependences were obtained on the relative beam size, the spatial non-uniformity of the transmissive screen, lens distortion, physical vignetting, beam tilt, the CCD spatial resolution, and the ADC resolution of the camera. It is shown that the error can be less than 1 %.
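The abstract names a super-Lorentzian model of the flat-top beam but not its exact parametrization or the diameter criterion; the sketch below assumes the common form I(r) = I0 / (1 + (r/w)^(2p)) and estimates the diameter at an assumed relative intensity level, purely as an illustration.

```python
import numpy as np

def super_lorentz(r, i0=1.0, w=5.0, p=8):
    """Super-Lorentzian (flat-top) radial intensity profile; parameters assumed."""
    return i0 / (1.0 + (np.abs(r) / w) ** (2 * p))

# Sample the profile on a grid that mimics a CCD line-out.
r = np.linspace(-15.0, 15.0, 3001)   # mm
profile = super_lorentz(r)

# Diameter at an assumed relative threshold level (e.g. 0.5 of the peak).
level = 0.5
above = r[profile >= level * profile.max()]
diameter = above.max() - above.min()
print(f"estimated diameter at {level:.0%} level: {diameter:.3f} mm "
      f"(analytic: {2 * 5.0 * (1/level - 1) ** (1/(2*8)):.3f} mm)")
```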
- On some properties of short-wave statistics of FOREX time series
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 657-669

Financial mathematics is one of the most natural applications for the statistical analysis of time series. Financial time series reflect the simultaneous activity of a large number of different economic agents; consequently, one expects that methods of statistical physics and the theory of random processes can be applied to them.
In this paper, we provide a statistical analysis of time series of the FOREX currency market. Of particular interest is the comparison of the time series behavior depending on the way time is measured: physical time versus trading time measured in the number of elementary price changes (ticks). The experimentally observed statistics of the time series under consideration (euro–dollar for the first half of 2007 and for 2009, and British pound–dollar for 2007) differ radically depending on the choice of the method of time measurement. When time is measured in ticks, the distribution of price increments is already well described by the normal distribution on a scale of the order of ten ticks. At the same time, when price increments are measured in real physical time, their distribution continues to differ radically from the normal up to scales of the order of minutes and even hours.
To explain this phenomenon, we investigate the statistical properties of elementary increments in price and time. In particular, we show that the distributions of time between ticks for all three time series have long (spanning 1–2 orders of magnitude) power-law tails with exponential cutoffs at large times. We obtain approximate expressions for the waiting-time distributions in all three cases. Other statistical characteristics of the time series (the distribution of elementary price changes, pair correlation functions for price increments and for waiting times) demonstrate fairly simple behavior. Thus, it is the anomalously wide distribution of the waiting times that plays the most important role in the deviation of the distribution of increments from the normal. Finally, we discuss the possibility of applying a continuous time random walk (CTRW) model to describe the FOREX time series.
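The tail exponents and cutoffs fitted in the paper are not reproduced here; the sketch below merely illustrates the CTRW mechanism invoked in the last sentence: Gaussian per-tick increments combined with heavy-tailed (Pareto) waiting times yield price changes over fixed physical-time windows that are strongly non-Gaussian, which the excess kurtosis makes visible. All distribution parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ticks = 200_000

# CTRW ingredients (illustrative parameters, not fitted to FOREX data):
# per-tick price increments ~ N(0, 1), waiting times ~ Pareto with tail index 1.5.
increments = rng.normal(0.0, 1.0, n_ticks)
waits = rng.pareto(1.5, n_ticks) + 1.0          # heavy-tailed inter-tick times
t = np.cumsum(waits)
price = np.cumsum(increments)

# Price change over fixed physical-time windows of length T.
T = 50.0
edges = np.arange(0.0, t[-1], T)
idx = np.searchsorted(t, edges)                  # tick index at each window edge
window_changes = np.diff(price[np.clip(idx, 0, n_ticks - 1)])

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

print(f"excess kurtosis per tick:        {excess_kurtosis(increments):+.2f}")
print(f"excess kurtosis per time window: {excess_kurtosis(window_changes):+.2f}")
```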
- Model for building a radio environment map for an LTE-based cognitive communication system
Computer Research and Modeling, 2022, v. 14, no. 1, pp. 127-146

The paper is devoted to the secondary use of spectrum in telecommunication networks. It is emphasized that one solution to this problem is the use of cognitive radio and dynamic spectrum access technologies, whose successful functioning requires a large amount of information, including the parameters of base stations and network subscribers. Storage and processing of this information should be carried out using a radio environment map, a spatio-temporal database of all activity in the network that makes it possible to determine the frequencies available for use at a given time. The paper presents a two-level model for building the radio environment map of an LTE cellular communication system, in which local and global levels are distinguished and which is described by the following parameters: the set of frequencies, signal attenuation, signal propagation map, grid step, and current time count. The key objects of the model are the base station and the subscriber device. The main parameters of a base station include: name, identifier, cell coordinates, band number, radiated power, numbers of connected subscriber devices, and allocated resource blocks. For subscriber devices, the following parameters are used: name, identifier, location, current coordinates of the device cell, base station identifier, frequency band, numbers of resource blocks for communication with the station, radiated power, data transmission status, list of numbers of the nearest stations, and schedules of device movement and communication sessions. An algorithm implementing the model is presented that takes into account the scenarios of movement and communication sessions of subscriber devices. A method is given for calculating the radio environment map at a point of the coordinate grid, taking into account the losses during the propagation of radio signals from emitting devices. The software implementation of the model is performed using the MATLAB package, and approaches that increase its execution speed are described. In the simulation, the parameters were chosen taking into account data on existing communication systems and the need to economize computing resources. Experimental results of the radio environment map formation algorithm are demonstrated, confirming the correctness of the developed model.
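The propagation model used in the paper is not specified in the abstract; as a hedged sketch of the map-calculation step, the snippet below fills a coordinate grid with received power from several base stations using a generic log-distance path-loss law (coordinates, transmit powers and the path-loss exponent are illustrative, not parameters of the described model).

```python
import numpy as np

# Illustrative base stations: (x [m], y [m], transmit power [dBm]).
stations = [(100.0, 200.0, 43.0), (800.0, 600.0, 43.0), (400.0, 900.0, 40.0)]

# Coordinate grid of the radio environment map (assumed 1 km x 1 km, 10 m step).
step = 10.0
xs, ys = np.meshgrid(np.arange(0.0, 1000.0, step),
                     np.arange(0.0, 1000.0, step))

def path_loss_db(d, pl0=40.0, d0=1.0, n=3.5):
    """Generic log-distance path loss: PL(d) = PL(d0) + 10*n*log10(d/d0)."""
    d = np.maximum(d, d0)                       # avoid log of zero at the antenna
    return pl0 + 10.0 * n * np.log10(d / d0)

# Received power map: for each grid point keep the strongest station (dBm).
rx = np.full(xs.shape, -np.inf)
for sx, sy, ptx in stations:
    d = np.hypot(xs - sx, ys - sy)
    rx = np.maximum(rx, ptx - path_loss_db(d))

print(f"map shape {rx.shape}, received power range "
      f"[{rx.min():.1f}, {rx.max():.1f}] dBm")
```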
- Proof of the connection between the Beckmann model with degenerate cost functions and the model of stable dynamics
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 335-342

Since the 1950s the field of city transport modelling has progressed rapidly, and the first equilibrium distribution models of traffic flow appeared. The most popular model (which is still widely used) is the Beckmann model, based on the two Wardrop principles. The core of the model can be briefly described as the search for a Nash equilibrium in a population game in which the losses of agents (drivers) are calculated based on the chosen path and its costs, with the correspondences (travel demands) being fixed. The costs of a path are calculated as the sum of the costs of the path segments (graph edges) included in the path. The costs of an edge (edge travel time) are determined by the amount of traffic on this edge (more traffic means larger travel time). The flow on a graph edge is determined by the sum of the flows over all paths passing through the given edge. Thus, the cost of traveling along a path is determined not only by the choice of the path, but also by the paths other drivers have chosen, which makes this a standard game-theoretic problem. The way the cost functions are constructed allows us to narrow the search for equilibrium to solving an optimization problem (the game is potential in this case). If the cost functions are monotone and non-decreasing, the optimization problem is convex. Different assumptions about the cost functions effectively form different models; the most popular one is based on the BPR cost function, which is massively used in calculations for real cities. However, at the beginning of the XXI century, Yu. E. Nesterov and A. de Palma showed that Beckmann-type models have serious weak points, which could be fixed using what the authors called the stable dynamics model. The search for equilibrium here can also be reduced to an optimization problem, moreover, a linear programming problem. In 2013, A. V. Gasnikov discovered that the stable dynamics model can be obtained by a passage to the limit in the Beckmann model, but only for several practically important, yet still special, cases. In general, the question of whether this passage to the limit is possible remained open. In this paper, we justify the possibility of the above-mentioned passage to the limit in the general case, when the cost function for traveling along an edge, as a function of the flow along the edge, degenerates into a function equal to the fixed cost until the capacity is reached and to plus infinity when the capacity is exceeded.
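For readability, the degenerate edge cost described at the end of the abstract can be written out explicitly, together with a BPR-type parametrization whose limit produces it; the notation below (t̄_e for the free-flow cost, f̄_e for the edge capacity, μ → 0+) is illustrative and may differ from the paper's.

```latex
% Degenerate edge cost of the stable dynamics model and a BPR-type cost
% whose limit produces it (notation illustrative).
\tau_e(f_e) =
\begin{cases}
  \bar{t}_e, & 0 \le f_e < \bar{f}_e,\\
  +\infty,   & f_e > \bar{f}_e,
\end{cases}
\qquad
\tau_e^{\mu}(f_e) = \bar{t}_e\!\left(1 + \rho\left(\frac{f_e}{\bar{f}_e}\right)^{1/\mu}\right)
\;\xrightarrow[\;\mu \to 0+\;]{}\; \tau_e(f_e) \quad (f_e \ne \bar{f}_e).
```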
- Computational algorithm for solving the nonlinear boundary-value problem of hydrogen permeability with dynamic boundary conditions and concentration-dependent diffusion coefficient
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1179-1193

The article deals with the nonlinear boundary-value problem of hydrogen permeability corresponding to the following experiment. A membrane made of the target structural material, heated to a sufficiently high temperature, serves as the partition in a vacuum chamber. Degassing is performed in advance. A constant pressure of gaseous (molecular) hydrogen is built up at the inlet side, and the penetrating flux is determined by mass spectrometry in the vacuum maintained at the outlet side.
A linear dependence on concentration is adopted for the diffusion coefficient of dissolved atomic hydrogen in the bulk, and the temperature dependence conforms to the Arrhenius law. The surface processes of dissolution and sorption–desorption are taken into account in the form of nonlinear dynamic boundary conditions (differential equations for the dynamics of the surface concentrations of atomic hydrogen). The characteristic mathematical feature of the boundary-value problem is that concentration time derivatives are included both in the diffusion equation and in the boundary conditions with quadratic nonlinearity. In terms of the general theory of functional differential equations, this leads to so-called neutral type equations and requires a more complex mathematical apparatus. An iterative computational algorithm of second- (higher-) order accuracy is suggested for solving the corresponding nonlinear boundary-value problem based on explicit-implicit difference schemes. To avoid solving a nonlinear system of equations at every time step, we apply the explicit component of the difference scheme to the slower sub-processes.
The results of numerical modeling are presented to confirm the fitness of the model to experimental data. The degrees of impact of variations in hydrogen permeability parameters (“derivatives”) on the penetrating flux and the concentration distribution of H atoms through the sample thickness are determined. This knowledge is important, in particular, when designing protective structures against hydrogen embrittlement or membrane technologies for producing high-purity hydrogen. The computational algorithm enables using the model in the analysis of extreme regimes for structural materials (pressure drops, high temperatures, unsteady heating), identifying the limiting factors under specific operating conditions, and saving on costly experiments (especially in deuterium-tritium investigations).
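The explicit-implicit scheme and the actual surface kinetics are not reproduced in the abstract; the sketch below is only a fully explicit toy version (in normalized units) of the same class of problem: bulk diffusion with a concentration-dependent coefficient D(c) = D0(1 + a·c) coupled to ODE boundary conditions for the surface concentrations with quadratic desorption. All rate constants are assumed, and the real algorithm described in the paper treats the stiff parts implicitly.

```python
import numpy as np

# Toy explicit scheme (normalized units) for c_t = (D(c) c_x)_x with dynamic
# boundary conditions: ODEs for the surface concentrations q0 (inlet) and qL
# (outlet) with quadratic desorption. All parameters are illustrative.
nx = 51
L = 1.0
dx = L / nx
D0, a = 1.0, 0.5                     # D(c) = D0 * (1 + a * c)
mu_p, b, k_sb, k_bs = 1.0, 1.0, 1.0, 1.0   # adsorption, desorption, exchange rates

c = np.zeros(nx)                     # bulk concentration (cell averages)
q0 = qL = 0.0                        # surface concentrations
dt = 0.2 * dx**2 / (D0 * (1.0 + a))  # explicit diffusion stability limit

def D(c):
    return D0 * (1.0 + a * c)

for _ in range(200_000):
    # Fluxes at the nx+1 cell faces (positive in the +x direction).
    f_left = k_sb * q0 - k_bs * c[0]          # surface -> bulk at the inlet
    f_right = k_bs * c[-1] - k_sb * qL        # bulk -> surface at the outlet
    d_face = 0.5 * (D(c[:-1]) + D(c[1:]))
    f_int = -d_face * np.diff(c) / dx
    flux = np.concatenate(([f_left], f_int, [f_right]))
    # Conservative update of the bulk and explicit step for the surface ODEs.
    c -= dt * np.diff(flux) / dx
    q0 += dt * (mu_p - b * q0**2 - f_left)    # gas supply, desorption, exchange
    qL += dt * (f_right - b * qL**2)          # uptake from bulk, desorption

print(f"steady penetrating flux ~ {b * qL**2:.4f}, "
      f"surface concentrations: q0 = {q0:.3f}, qL = {qL:.3f}")
```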
- Signal and noise calculation in Rician data analysis by combining the maximum likelihood technique and the method of moments
Computer Research and Modeling, 2018, v. 10, no. 4, pp. 511-523

The paper develops a new mathematical method for the joint calculation of signal and noise under the Rice statistical distribution, based on combining the maximum likelihood method and the method of moments. The sought-for values of signal and noise are calculated by processing sampled measurements of the analyzed Rician signal's amplitude. An explicit system of equations for the required signal and noise parameters has been obtained, and the results of its numerical solution are provided, confirming the efficiency of the proposed technique. It has been shown that solving the two-parameter task by means of the proposed technique does not increase the required computational resources compared with solving the task in the one-parameter approximation. An analytical solution has been obtained for the particular case of a small signal-to-noise ratio. The paper investigates the dependence of the accuracy and variance of the sought-for parameter estimates on the number of measurements in the experimental sample. According to the results of numerical experiments, the variances of the estimated signal and noise parameters calculated by means of the proposed technique change in inverse proportion to the number of measurements in the sample. The accuracy of estimating the sought-for Rician parameters by the proposed technique is compared with that of an earlier developed version of the method of moments. The problem considered in the paper is meaningful for Rician data processing, in particular in magnetic resonance imaging systems, ultrasonic imaging devices, the analysis of optical signals in range-measuring systems and the analysis of radar signals, as well as in many other scientific and applied tasks that are adequately described by the Rice statistical model.
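The combined maximum-likelihood/moments equations themselves are not given in the abstract; as a baseline of the kind the paper compares against, the snippet below implements the classical method-of-moments estimator for the Rice parameters from the second and fourth sample moments, E[R²] = ν² + 2σ² and E[R⁴] = ν⁴ + 8σ²ν² + 8σ⁴, which give ν² = √(2m₂² − m₄) and σ² = (m₂ − ν²)/2. This is not the authors' combined technique.

```python
import numpy as np
from scipy.stats import rice

rng = np.random.default_rng(1)

# Simulated Rician amplitudes with known signal nu and noise sigma.
nu_true, sigma_true, n = 2.0, 1.0, 50_000
r = rice.rvs(b=nu_true / sigma_true, scale=sigma_true, size=n, random_state=rng)

# Method of moments for the Rice distribution:
#   E[R^2] = nu^2 + 2*sigma^2,  E[R^4] = nu^4 + 8*sigma^2*nu^2 + 8*sigma^4,
# hence nu^2 = sqrt(2*m2^2 - m4), sigma^2 = (m2 - nu^2) / 2.
m2, m4 = np.mean(r**2), np.mean(r**4)
nu2 = np.sqrt(max(2.0 * m2**2 - m4, 0.0))
sigma2 = 0.5 * (m2 - nu2)
print(f"estimated nu = {np.sqrt(nu2):.3f} (true {nu_true}), "
      f"sigma = {np.sqrt(sigma2):.3f} (true {sigma_true})")
```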