-
Model for building a radio environment map for an LTE-based cognitive communication system
Computer Research and Modeling, 2022, v. 14, no. 1, pp. 127-146
The paper is devoted to the secondary use of spectrum in telecommunication networks. One solution to this problem is the use of cognitive radio technologies and dynamic spectrum access, whose successful operation requires a large amount of information, including the parameters of base stations and network subscribers. This information should be stored and processed in a radio environment map: a spatio-temporal database of all activity in the network that makes it possible to determine the frequencies available for use at a given time. The paper presents a two-level (local and global) model for forming the radio environment map of an LTE cellular communication system, described by the following parameters: a set of frequencies, signal attenuation, a signal propagation map, the grid step, and the current time step. The key objects of the model are the base station and the subscriber device. The main parameters of a base station are its name, identifier, cell coordinates, band number, radiated power, the numbers of connected subscriber devices, and the allocated resource blocks. A subscriber device is characterized by its name, identifier, location, current cell coordinates, base station identifier, frequency band, the numbers of resource blocks used for communication with the station, radiated power, data transmission status, a list of the nearest stations, and the movement and communication-session schedules of the device. An algorithm implementing the model is presented that takes into account the movement and communication-session scenarios of subscriber devices. A method is given for calculating the radio environment map at a point of the coordinate grid, taking into account propagation losses of the radio signals from emitting devices.
The software implementation of the model is performed in the MATLAB package, and approaches that increase its execution speed are described. In the simulation, the parameters were chosen taking into account the data of existing communication systems and the economy of computing resources. Experimental results of the algorithm for forming a radio environment map are demonstrated, confirming the correctness of the developed model.
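As an illustration of the map-calculation step, here is a minimal sketch (not the paper's code; the free-space path-loss formula and all parameter values are placeholder assumptions) of accumulating received power from emitting devices over a coordinate grid:

```python
import numpy as np

def rem_grid(emitters, nx, ny, step, f_mhz=1800.0):
    """Received-power map (dBm) over an nx-by-ny grid with spacing step (m).

    emitters: list of (x, y, p_tx_dbm) tuples. Free-space path loss is used
    here purely as a placeholder for the paper's propagation model.
    """
    xs, ys = np.arange(nx) * step, np.arange(ny) * step
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    p_mw = np.full((nx, ny), 1e-12)              # noise floor, avoids log10(0)
    for ex, ey, p_tx in emitters:
        d = np.maximum(np.hypot(gx - ex, gy - ey), step / 2)     # metres
        fspl = 32.44 + 20 * np.log10(d / 1000.0) + 20 * np.log10(f_mhz)
        p_mw += 10.0 ** ((p_tx - fspl) / 10.0)   # accumulate power in mW
    return 10.0 * np.log10(p_mw)                 # back to dBm

# One base station at the origin radiating 43 dBm, 50x50 grid, 20 m step.
rem = rem_grid([(0.0, 0.0, 43.0)], nx=50, ny=50, step=20.0)
```

Power is summed in milliwatts rather than in decibels, since dB values from several emitters cannot be added directly.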
-
Proof of the connection between the Beckmann model with degenerate cost functions and the model of stable dynamics
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 335-342
Since the 1950s, the field of city transport modelling has progressed rapidly, and the first equilibrium models of traffic flow distribution appeared. The most popular model (which is still widely used) is the Beckmann model, based on the two Wardrop principles. The core of the model can be briefly described as the search for a Nash equilibrium in a population demand game in which the losses of agents (drivers) are calculated from the chosen path and the costs of this path, with correspondences being fixed. The cost of a path is calculated as the sum of the costs of the path segments (graph edges) included in the path. The cost of an edge (edge travel time) is determined by the amount of traffic on this edge (more traffic means a larger travel time). The flow on a graph edge is the sum of the flows over all paths passing through that edge. Thus, the cost of traveling along a path is determined not only by the choice of the path but also by the paths other drivers have chosen, which makes this a standard game-theoretic problem. The way the cost functions are constructed allows the search for equilibrium to be reduced to solving an optimization problem (the game is potential in this case). If the cost functions are monotone and non-decreasing, the optimization problem is convex. Different assumptions about the cost functions produce different models; the most popular one is based on the BPR cost function, which is massively used in calculations for real cities. However, at the beginning of the XXI century, Yu. E. Nesterov and A. de Palma showed that Beckmann-type models have serious weak points, which could be fixed using what the authors called the stable dynamics model. The search for equilibrium there also reduces to an optimization problem, moreover, a linear programming problem.
In 2013, A. V. Gasnikov discovered that the stable dynamics model can be obtained by a passage to the limit in the Beckmann model. However, this was done only for several practically important but still special cases; in general, the question of whether this passage to the limit is possible remained open. In this paper, we justify the possibility of the above-mentioned passage to the limit in the general case, when the cost function for traveling along an edge, as a function of the flow along the edge, degenerates into a function equal to fixed costs until the capacity is reached and to plus infinity when the capacity is exceeded.
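The degenerate limit described above can be sketched in formulas; the BPR-type parameterization below (with free-flow time $\bar t_e$, capacity $\bar f_e$, and parameters $\rho, \mu > 0$) is one common choice and is given for illustration only:

```latex
\tau_e^{\mu}(f_e) \;=\; \bar t_e \left( 1 + \rho \left( \frac{f_e}{\bar f_e} \right)^{1/\mu} \right)
\;\xrightarrow[\;\mu \to 0+\;]{}\;
\begin{cases}
\bar t_e, & 0 \le f_e < \bar f_e, \\
+\infty,  & f_e > \bar f_e,
\end{cases}
```

so that in the limit the edge cost equals the fixed cost $\bar t_e$ below capacity and is infinite above it; at $f_e = \bar f_e$ the equilibrium edge cost may take any value in $[\bar t_e, +\infty)$, which is exactly the cost structure of the stable dynamics model.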
-
Numerical modeling of raw material atomization and vaporization by the heat carrier gas flow in furnace technical carbon production in FlowVision
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 921-939
Technical carbon (soot) is a product obtained by thermal decomposition (pyrolysis) of hydrocarbons (usually oil) in a stream of heat carrier gas. It is widely used as a reinforcing component in the production of rubber and plastics; tire production consumes 70% of all carbon produced. In furnace carbon production, the liquid hydrocarbon feedstock is injected through nozzles into the stream of natural gas combustion products, where it is atomized and vaporized before undergoing pyrolysis. It is important for the raw material to be completely evaporated before pyrolysis starts; otherwise coke, which contaminates the product, is produced. Improving the carbon production technology, in particular ensuring complete evaporation of the raw material prior to pyrolysis, is impossible without mathematical modeling of the process itself. Mathematical modelling is the most important way to obtain complete and detailed information about the peculiarities of reactor operation.
A three-dimensional mathematical model and a calculation method for raw material atomization and evaporation in the heat carrier gas flow are developed in the FlowVision software package. Water is selected as the raw material for working out the modeling technique. The working substances in the reactor chamber are the combustion products of natural gas. The motion and evaporation of raw material droplets in the gas stream are modeled within the Eulerian approach to the interaction between dispersed and continuous media. Simulation results for raw material atomization and evaporation in a real reactor for technical carbon production are presented. The numerical method makes it possible to determine an important atomization characteristic, the Sauter mean diameter, which can be obtained from the droplet size distribution of the raw material at each moment of spray formation.
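The Sauter mean diameter mentioned above is a simple moment ratio of the droplet size distribution; a minimal sketch (the sample values are illustrative):

```python
import numpy as np

def sauter_mean_diameter(diameters, counts=None):
    """D32 = sum(n*d^3) / sum(n*d^2): the diameter of a droplet with the same
    volume-to-surface ratio as the whole spray sample."""
    d = np.asarray(diameters, dtype=float)
    n = np.ones_like(d) if counts is None else np.asarray(counts, dtype=float)
    return (n * d**3).sum() / (n * d**2).sum()

# For a monodisperse spray, D32 equals the common droplet diameter.
print(sauter_mean_diameter([50e-6, 50e-6, 50e-6]))
```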
-
Computational algorithm for solving the nonlinear boundary-value problem of hydrogen permeability with dynamic boundary conditions and concentration-dependent diffusion coefficient
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1179-1193
The article deals with the nonlinear boundary-value problem of hydrogen permeability corresponding to the following experiment. A membrane made of the target structural material, heated to a sufficiently high temperature, serves as the partition in a vacuum chamber. Degassing is performed in advance. A constant pressure of gaseous (molecular) hydrogen is built up at the inlet side. The penetrating flux is determined by mass spectrometry in the vacuum maintained at the outlet side.
A linear dependence on concentration is adopted for the diffusion coefficient of dissolved atomic hydrogen in the bulk; the temperature dependence conforms to the Arrhenius law. The surface processes of dissolution and sorption-desorption are taken into account in the form of nonlinear dynamic boundary conditions (differential equations for the dynamics of the surface concentrations of atomic hydrogen). The characteristic mathematical feature of the boundary-value problem is that concentration time derivatives enter both the diffusion equation and the boundary conditions with quadratic nonlinearity. In terms of the general theory of functional differential equations, this leads to so-called neutral-type equations and requires a more complex mathematical apparatus. An iterative computational algorithm of second- (and higher-) order accuracy is suggested for solving the corresponding nonlinear boundary-value problem, based on explicit-implicit difference schemes. To avoid solving a nonlinear system of equations at every time step, we apply the explicit component of the difference scheme to the slower sub-processes.
The results of numerical modeling are presented, confirming the agreement of the model with experimental data. The degrees of impact of variations in the hydrogen permeability parameters (“derivatives”) on the penetrating flux and on the concentration distribution of H atoms through the sample thickness are determined. This knowledge is important, in particular, when designing protective structures against hydrogen embrittlement or membrane technologies for producing high-purity hydrogen. The computational algorithm enables using the model in the analysis of extreme regimes for structural materials (pressure drops, high temperatures, unsteady heating), identifying the limiting factors under specific operating conditions, and saving on costly experiments (especially in deuterium-tritium investigations).
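The explicit-implicit idea can be illustrated on the diffusion equation alone. The sketch below freezes the concentration-dependent coefficient $D(c) = D_0(1 + ac)$ at the old time level and treats the diffusion operator implicitly; the Dirichlet boundary values are a deliberate simplification standing in for the paper's dynamic surface conditions, and all names and values are illustrative:

```python
import numpy as np

def imex_diffusion_step(c, dt, dx, d0, a, c_in, c_out):
    """One linearized implicit step of c_t = (D(c) c_x)_x with D(c) = d0*(1 + a*c).

    D is frozen at the old time level (the explicit component); the second
    difference is implicit, so each step solves one linear system instead of
    a nonlinear one.
    """
    n = len(c)
    d_face = d0 * (1.0 + a * 0.5 * (c[:-1] + c[1:]))  # D at cell interfaces
    r = dt / dx**2
    A = np.zeros((n, n))
    b = c.copy()
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = c_in, c_out           # simplified Dirichlet boundaries
    for i in range(1, n - 1):
        A[i, i - 1] = -r * d_face[i - 1]
        A[i, i + 1] = -r * d_face[i]
        A[i, i] = 1.0 + r * (d_face[i - 1] + d_face[i])
    return np.linalg.solve(A, b)

c = np.zeros(11)                        # degassed membrane at t = 0
for _ in range(10):
    c = imex_diffusion_step(c, dt=0.01, dx=0.1, d0=1.0, a=0.5,
                            c_in=1.0, c_out=0.0)
```

The resulting profile decreases monotonically from the inlet to the outlet side, as the discrete maximum principle of the implicit scheme guarantees.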
-
Classifier size optimisation in segmentation of three-dimensional point images of wood vegetation
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 665-675
The advent of laser scanning technologies has revolutionized forestry. Their use made it possible to switch from studying woodlands using manual measurements to computer analysis of stereo point images called point clouds.
Automatic calculation of some tree parameters (such as trunk diameter) using a point cloud requires the removal of foliage points. To perform this operation, a preliminary segmentation of the stereo image into the “foliage” and “trunk” classes is required. The solution to this problem often involves the use of machine learning methods.
One of the most popular classifiers used for segmentation of stereo images of trees is the random forest. This classifier is quite demanding in terms of memory. At the same time, the size of a machine learning model can be critical if it needs to be sent over a network, as is required, for example, in distributed learning. In this paper, the goal is to find a classifier that is less demanding in terms of memory while having comparable segmentation accuracy. The search is performed among classifiers such as logistic regression, the naive Bayes classifier, and the decision tree. In addition, a method for refining the segmentation produced by a decision tree using logistic regression is investigated.
The experiments were conducted on data from the collection of the University of Heidelberg. The collection contains hand-marked stereo images of trees of various species, both coniferous and deciduous, typical of the forests of Central Europe.
It has been shown that classification by a decision tree refined with logistic regression is able to produce a result only slightly inferior in accuracy to that of a random forest while consuming less time and RAM. The difference in balanced accuracy is no more than one percent on all the point clouds considered, while the total size and inference time of the decision tree and logistic regression classifiers are an order of magnitude smaller than those of the random forest classifier.
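A rough sketch of the size comparison using scikit-learn on synthetic data (the refinement scheme below, feeding the tree's class probabilities into a logistic regression, is a guess at the paper's method, not its actual code; all parameters are illustrative):

```python
import pickle
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for per-point geometric features of a tree point cloud.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
Xtr, Xte, ytr, yte = X[:1500], X[1500:], y[:1500], y[1500:]

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
tree = DecisionTreeClassifier(max_depth=8, random_state=0).fit(Xtr, ytr)

def with_tree_proba(Z):
    # Append the tree's class probabilities to the raw features.
    return np.hstack([Z, tree.predict_proba(Z)])

logreg = LogisticRegression(max_iter=1000).fit(with_tree_proba(Xtr), ytr)

def model_size(*models):
    # Serialized size in bytes, a proxy for the memory / transfer cost.
    return sum(len(pickle.dumps(m)) for m in models)

acc = logreg.score(with_tree_proba(Xte), yte)
print(model_size(forest), model_size(tree, logreg))
```

Even on this toy data the pickled forest is far larger than the tree and logistic regression combined, which is the trade-off the paper quantifies on real point clouds.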
-
Research on the achievability of a goal in a medical quest
Computer Research and Modeling, 2025, v. 17, no. 6, pp. 1149-1179
The work presents an experimental study of the tree structure that arises during a medical examination. At each appointment with a medical specialist, the patient receives a certain number of referrals to other specialists or for tests. A tree of referrals arises, each branch of which the patient must traverse. Depending on its branching, the tree can be either finite, in which case the examination can be completed, or infinite, in which case the patient's goal cannot be achieved. The work studies, both experimentally and theoretically, the critical properties of the transition of the system from a forest of finite trees to a forest of infinite ones, depending on the probabilistic characteristics of the tree.
For the description, a model is proposed in which the discrete probability distribution of the number of branches at a node follows the shape of a continuous Gaussian distribution. The characteristics of the Gaussian distribution (the mathematical expectation $x_0$ and the standard deviation $\sigma$) serve as model parameters. In this setting, the problem belongs to the class of branching random processes (BRP) in the heterogeneous Galton – Watson model.
The experimental study is carried out by numerical modeling on finite lattices. A phase diagram was built, and the boundaries of the regions of the various phases were determined. A comparison was made with the phase diagram obtained from theoretical criteria for macrosystems, and adequate correspondence was established. It is shown that on finite lattices the transition is blurred.
The blurred phase transition was described using two approaches. In the first, standard approach, the transition is described using the so-called inclusion function, which has the meaning of the fraction of one of the phases in the whole set. It was established that this approach is ineffective for this system, since the found position of the conditional boundary of the blurred transition is determined only by the size of the chosen experimental lattice and carries no objective meaning.
The second, original approach is based on introducing an order parameter equal to the inverse average tree height and analyzing its behavior. It was established that the dynamics of this order parameter in the $\sigma = \text{const}$ section has, up to very small differences, the form of the Fermi – Dirac distribution ($\sigma$ plays the same role as the temperature in the Fermi – Dirac distribution, and $x_0$ that of the energy). An empirical expression for the order parameter has been selected, and an analogue of the chemical potential is introduced and calculated, which has the meaning of the characteristic scale of the order parameter, that is, the value of $x_0$ at which order can be considered to become disorder. This criterion is the basis for determining the boundary of the conditional transition in this approach. It was established that this boundary corresponds to an average tree height of two generations. Based on the properties found, recommendations are proposed for medical institutions to control the finiteness of patients' examination paths.
The discussed model and its description in terms of conditionally infinite trees apply to many hierarchical systems, including internet routing networks, bureaucratic networks, trade and logistics networks, citation networks, game strategies, population dynamics problems, and others.
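A minimal simulation of the referral tree as a Galton – Watson process with a Gaussian-shaped offspring distribution (a homogeneous toy version, not the paper's heterogeneous model; the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def offspring_pmf(x0, sigma, kmax=12):
    """Discrete offspring distribution following a Gaussian profile."""
    k = np.arange(kmax + 1)
    w = np.exp(-0.5 * ((k - x0) / sigma) ** 2)
    return k, w / w.sum()

def tree_height(x0, sigma, max_gen=60):
    """Generations until a Galton-Watson referral tree dies out (capped)."""
    k, p = offspring_pmf(x0, sigma)
    pop, gen = 1, 0
    while pop > 0 and gen < max_gen:
        pop = rng.choice(k, size=pop, p=p).sum()
        gen += 1
    return gen

# Subcritical regime: mean offspring below one, trees almost surely finite.
k, p = offspring_pmf(x0=0.5, sigma=0.5)
mean_offspring = (k * p).sum()
heights = [tree_height(0.5, 0.5) for _ in range(200)]
```

With mean offspring below one the process is subcritical and every referral tree is finite almost surely; raising $x_0$ past the critical value makes effectively infinite trees appear, which is the transition the paper studies.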
-
Mathematical model of hydride phase change in a symmetrical powder particle
Computer Research and Modeling, 2012, v. 4, no. 3, pp. 569-584
In the paper we construct a model of phase change, with the hydriding / dehydriding process taken as an example. A single powder particle is considered under the assumption of its symmetry; a ball, a cylinder, and a flat plate are examples of such symmetrical shapes. The model describes both the "shrinking core" scenario (when a skin of the new phase appears on the surface of the particle) and the "nucleation and growth" scenario (when the skin does not appear until the complete vanishing of the old phase). The model is a non-classical boundary-value problem with a free boundary and a nonlinear Neumann boundary condition. The symmetry assumptions allow the problem to be reduced to a single spatial variable. The model was tested on a series of experimental data. We show that the influence of the particle shape on the kinetics is insignificant. We also show that a set of particles of different shapes with a size distribution can be approximated by a single particle of the "average" size and of a simple shape; this justifies using the single-particle approximation and simple shapes in mathematical models.
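For context, the classic textbook shrinking-core relation for a spherical particle with diffusion through the product layer as the rate-limiting step (this is the standard Levenspiel-type formula, given only as background, not the paper's non-classical free-boundary model):

```latex
\frac{t}{\tau} \;=\; 1 - 3(1 - X)^{2/3} + 2(1 - X),
```

where $X$ is the converted fraction and $\tau$ is the time of complete conversion of the particle.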
-
Calculation of spatial distribution of differently oriented LEDs
Computer Research and Modeling, 2014, v. 6, no. 4, pp. 577-584
A new method for calculating the spatial light distribution of differently oriented LEDs is proposed. The main idea is the combination of the coordinate systems associated with these light sources. Unlike other conventional approaches, this method can be applied to emitters whose light distribution has arbitrary symmetry or no symmetry at all.
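The idea of combining coordinate systems can be sketched as follows: each LED stores a rotation into its own frame, and a world direction is mapped into that frame before the intensity pattern is evaluated (the Lambertian cos^m pattern and all numbers below are illustrative assumptions, not the paper's method):

```python
import numpy as np

def rot_y(a):
    """Rotation matrix about the y axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def intensity(direction, leds, m=1.0):
    """Total intensity of several differently oriented LEDs.

    Each LED is a pair (R, i0): R maps world coordinates into the LED's own
    frame, whose +z axis is the LED's optical axis; i0 is its peak intensity.
    A cos^m pattern stands in for an arbitrary measured distribution.
    """
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    total = 0.0
    for R, i0 in leds:
        cos_t = (R @ d)[2]                  # cosine of angle to optical axis
        total += i0 * max(cos_t, 0.0) ** m  # no emission into the back half
    return total

# Two LEDs tilted +-45 degrees about y; along world +z both see 45 degrees.
leds = [(rot_y(np.pi / 4), 1.0), (rot_y(-np.pi / 4), 1.0)]
print(intensity([0, 0, 1], leds))  # 2*cos(45 deg) ~ 1.414
```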
-
Modeling of plankton community state with density-dependent death and spatial activity of zooplankton
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 549-560
A vertically distributed three-component model of a marine ecosystem is considered. The state of the plankton community with nutrients is analyzed under active movement of zooplankton in a vertical column of water. Necessary conditions for Turing instability in the vicinity of the spatially homogeneous equilibrium are obtained. The stability of the spatially homogeneous equilibrium, the Turing instability, and the oscillatory instability are examined depending on the biological characteristics of zooplankton and the spatial movement of plankton. It is shown that at low values of the zooplankton grazing rate and the intratrophic interaction rate, the system is Turing unstable when the taxis rate is low. Stabilization occurs either through an increased decline of zooplankton or through increased phytoplankton diffusion. As the rate of phytoplankton consumption grows, the range of parameters for which the system is stable shrinks. The type of instability depends on phytoplankton diffusion: for large values of diffusion, oscillatory instability is observed, while with decreasing phytoplankton diffusion the zone of Turing instability grows. In general, if the zooplankton grazing rate exceeds the phytoplankton growth rate, the spatially homogeneous equilibrium is Turing unstable or oscillatory unstable; stability is observed only at high rates of zooplankton departure or of its active movement. With increasing zooplankton search activity, the spatial distribution of populations becomes more uniform, whereas increasing the diffusion rate leads to a non-uniform spatial distribution. However, under diffusion the total population size stabilizes when the zooplankton grazing rate is above the phytoplankton growth rate.
In general, at a low rate of phytoplankton consumption, the formation of spatial structures is possible at low rates of zooplankton decline and of diffusion of the whole plankton community. As the phytoplankton predation rate increases, phytoplankton diffusion and zooplankton spatial movement have an essential effect on the spatial instability.
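The Turing instability criterion used in such analyses can be checked numerically: the homogeneous equilibrium must be stable without diffusion while some wavenumber destabilizes it. The Jacobian below is a generic two-component activator-inhibitor example with illustrative numbers, not the plankton model's linearization:

```python
import numpy as np

def turing_unstable(J, D, ks=np.linspace(0.01, 10.0, 500)):
    """Diffusion-driven instability test for the linearized system
    u_t = J u + D u_xx: True when J is stable (all Re(lambda) < 0) but some
    wavenumber k makes J - k^2 D unstable."""
    if np.real(np.linalg.eigvals(J)).max() >= 0:
        return False                 # already unstable without diffusion
    for k in ks:
        if np.real(np.linalg.eigvals(J - k**2 * D)).max() > 0:
            return True
    return False

# Stable reaction part: trace = -2 < 0, det = 1 > 0.
J = np.array([[1.0, -2.0], [2.0, -3.0]])
D_fast = np.diag([1.0, 20.0])        # inhibitor diffuses much faster
D_slow = np.diag([1.0, 1.0])         # equal diffusion: no Turing instability
print(turing_unstable(J, D_fast), turing_unstable(J, D_slow))
```

The example reproduces the classic requirement that the inhibitor (here, the second component) must diffuse much faster than the activator for patterns to form.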
-
Numerical investigations of mixing non-isothermal streams of sodium coolant in a T-branch
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 95-110
A numerical investigation of the mixing of non-isothermal streams of sodium coolant in a T-branch is carried out in the FlowVision CFD software. The study is aimed at substantiating the applicability of different approaches to predicting the oscillating behavior of the flow in the mixing zone and simulating temperature pulsations. The following approaches are considered: URANS (Unsteady Reynolds-Averaged Navier-Stokes), LES (Large Eddy Simulation), and quasi-DNS (Direct Numerical Simulation). One of the main tasks of the work is to identify the advantages and drawbacks of the aforementioned approaches.
The numerical investigation of temperature pulsations arising in the liquid and in the T-branch walls from the mixing of non-isothermal streams of sodium coolant was carried out within a mathematical model assuming that the flow is turbulent, the fluid density does not depend on pressure, and heat exchange proceeds between the coolant and the T-branch walls. The LMS model, designed for modeling turbulent heat transfer, was used in the calculations within the URANS approach; it allows calculation of the Prandtl number distribution over the computational domain.
A preliminary study was dedicated to estimating the influence of the computational grid on the development of the oscillating flow and the character of the temperature pulsations within the aforementioned approaches. The study resulted in the formulation of grid generation criteria for each approach.
Then, calculations of three flow regimes were carried out. The regimes differ in the ratios of the sodium mass flow rates and temperatures at the T-branch inlets. Each regime was calculated using the URANS, LES, and quasi-DNS approaches.
At the final stage of the work, an analytical comparison of numerical and experimental data was performed. The advantages and drawbacks of each approach to simulating the mixing of non-isothermal streams of sodium coolant in the T-branch are revealed and formulated.
It is shown that the URANS approach predicts the mean temperature distribution with reasonable accuracy and requires essentially fewer computational and time resources than the LES and quasi-DNS approaches. The drawback of this approach is that it does not reproduce pulsations of velocity, pressure, and temperature.
The LES and quasi-DNS approaches also predict the mean temperature with reasonable accuracy and provide oscillating solutions. The obtained amplitudes of the temperature pulsations exceed the experimental ones, while the spectral power densities at the check points inside the sodium flow agree well with the experimental data. However, the computational and time expenses essentially exceed those of the URANS approach in the performed numerical experiments: by a factor of 350 for LES and 1500 for quasi-DNS.
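The spectral power densities mentioned above are computed from temperature time series at the check points; here is a minimal periodogram sketch on a synthetic signal (the sampling rate, pulsation frequency, and amplitudes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000.0                        # probe sampling rate, Hz (illustrative)
t = np.arange(0.0, 10.0, 1.0 / fs)
# Synthetic check-point signal: mean temperature, a 25 Hz pulsation, noise.
temp = 300.0 + 2.0 * np.sin(2 * np.pi * 25.0 * t) \
       + 0.5 * rng.standard_normal(t.size)

x = temp - temp.mean()             # remove the mean before the transform
f = np.fft.rfftfreq(x.size, d=1.0 / fs)
psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)  # one-sided periodogram
peak = f[np.argmax(psd)]           # dominant pulsation frequency, Hz
```

In practice the raw periodogram is usually smoothed (e.g., by segment averaging) before comparison with experimental spectra.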
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index