Search results for 'distributed computing':
Articles found: 103
  1. Adamovskiy Y.R., Chertkov V.M., Bohush R.P.
    Model for building of the radio environment map for cognitive communication system based on LTE
    Computer Research and Modeling, 2022, v. 14, no. 1, pp. 127-146

    The paper is devoted to the secondary use of spectrum in telecommunication networks. It is emphasized that one of the solutions to this problem is the use of cognitive radio technologies and dynamic spectrum access, whose successful functioning requires a large amount of information, including the parameters of base stations and network subscribers. Storage and processing of this information should be carried out using a radio environment map, a spatio-temporal database of all activity in the network that makes it possible to determine the frequencies available for use at a given time. The paper presents a two-level model for forming the radio environment map of an LTE cellular communication system, in which local and global levels are distinguished; the model is described by the following parameters: a set of frequencies, signal attenuation, a signal propagation map, the grid step, and the current time count. The key objects of the model are the base station and the subscriber unit. The main parameters of the base station include: name, identifier, cell coordinates, range number, radiation power, numbers of connected subscriber devices, and dedicated resource blocks. For subscriber devices, the following parameters are used: name, identifier, location, current coordinates of the device cell, base station identifier, frequency range, numbers of resource blocks for communication with the station, radiation power, data transmission status, list of numbers of the nearest stations, and schedules of device movement and communication sessions. An algorithm for implementing the model is presented, taking into account the scenarios of movement and communication sessions of subscriber devices. A method is presented for calculating the radio environment map at a point of the coordinate grid, taking into account losses during the propagation of radio signals from emitting devices. The software implementation of the model is performed in the MatLab package, and approaches that increase its execution speed are described. In the simulation, the parameters were chosen taking into account the data of existing communication systems and economical use of computing resources. Experimental results of the algorithm for forming the radio environment map are demonstrated, confirming the correctness of the developed model.
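
    The map-calculation step described above amounts to evaluating, at every node of the coordinate grid, the contributions of the emitting devices attenuated by a propagation-loss model. The sketch below is only an illustrative toy, not the authors' MatLab implementation: it uses a free-space path-loss model, and all emitter positions, powers, frequencies and grid sizes are invented.

```python
import numpy as np

# Hypothetical emitters: (x [m], y [m], EIRP [dBm]); values are illustrative only.
emitters = [(100.0, 200.0, 43.0), (800.0, 500.0, 43.0), (400.0, 900.0, 23.0)]
freq_mhz = 1800.0          # assumed LTE carrier frequency
grid_step = 50.0           # coordinate-grid step, m
nx, ny = 20, 20            # grid size

def free_space_loss_db(d_m, f_mhz):
    """Free-space path loss in dB; distance is clipped to avoid log(0)."""
    d_km = np.maximum(d_m, 1.0) / 1000.0
    return 32.45 + 20.0 * np.log10(d_km) + 20.0 * np.log10(f_mhz)

# Received-power map: at each grid node keep the strongest contribution, dBm.
xs = np.arange(nx) * grid_step
ys = np.arange(ny) * grid_step
gx, gy = np.meshgrid(xs, ys, indexing="ij")
rem_map = np.full((nx, ny), -np.inf)
for ex, ey, p_dbm in emitters:
    d = np.hypot(gx - ex, gy - ey)
    rem_map = np.maximum(rem_map, p_dbm - free_space_loss_db(d, freq_mhz))

print(f"max/min received power on the grid: {rem_map.max():.1f} / {rem_map.min():.1f} dBm")
```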

  2. The article deals with the nonlinear boundary-value problem of hydrogen permeability corresponding to the following experiment. A membrane made of the target structural material heated to a sufficiently high temperature serves as the partition in the vacuum chamber. Degassing is performed in advance. A constant pressure of gaseous (molecular) hydrogen is built up at the inlet side. The penetrating flux is determined by mass-spectrometry in the vacuum maintained at the outlet side.

    A linear dependence on concentration is adopted for the diffusion coefficient of dissolved atomic hydrogen in the bulk. The temperature dependence conforms to the Arrhenius law. The surface processes of dissolution and sorption-desorption are taken into account in the form of nonlinear dynamic boundary conditions (differential equations for the dynamics of the surface concentrations of atomic hydrogen). The characteristic mathematical feature of the boundary-value problem is that concentration time derivatives are included both in the diffusion equation and in the boundary conditions with quadratic nonlinearity. In terms of the general theory of functional differential equations, this leads to so-called neutral-type equations and requires a more complex mathematical apparatus. An iterative computational algorithm of second- (and higher-) order accuracy, based on explicit-implicit difference schemes, is suggested for solving the corresponding nonlinear boundary-value problem. To avoid solving a nonlinear system of equations at every time step, the explicit component of the difference scheme is applied to the slower sub-processes.
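
    The explicit-implicit splitting can be illustrated on a strongly simplified 1D analogue: bulk diffusion is advanced implicitly, while the quadratic surface (desorption) dynamics are advanced explicitly. The sketch below is only a schematic illustration with invented dimensionless parameters, not the authors' second-order algorithm.

```python
import numpy as np

# Simplified 1D membrane: c_t = D*c_xx in (0, L); surface concentrations q0, qL
# obey quadratic desorption ODEs (dynamic boundary conditions). All values are
# illustrative, dimensionless placeholders, not parameters from the paper.
D, L, n, dt = 1.0, 1.0, 51, 1e-3
b, mu_p = 5.0, 1.0                 # desorption coefficient, dissolution flux
dx = L / (n - 1)
c = np.zeros(n)                    # bulk concentration
q0 = qL = 0.0                      # inlet / outlet surface concentrations

# Constant matrix of the implicit (backward Euler) diffusion step with
# Dirichlet coupling of the boundary nodes to the surface concentrations.
r = D * dt / dx**2
A = (np.diag((1 + 2 * r) * np.ones(n))
     + np.diag(-r * np.ones(n - 1), 1) + np.diag(-r * np.ones(n - 1), -1))
A[0, :] = A[-1, :] = 0.0
A[0, 0] = A[-1, -1] = 1.0

for _ in range(2000):
    # Explicit step for the slower surface sub-process (quadratic desorption).
    J_in = D * (c[0] - c[1]) / dx        # flux from the inlet surface into the bulk
    J_out = D * (c[-2] - c[-1]) / dx     # flux arriving at the outlet surface
    q0 += dt * (mu_p - b * q0**2 - J_in)
    qL += dt * (J_out - b * qL**2)
    # Implicit diffusion step using the freshly updated surface values.
    rhs = c.copy()
    rhs[0], rhs[-1] = q0, qL
    c = np.linalg.solve(A, rhs)

print(f"approximate outlet desorption flux: {b * qL**2:.4f}")
```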

    The results of numerical modeling are presented to confirm the fitness of the model to experimental data. The degrees of impact of variations in hydrogen permeability parameters (“derivatives”) on the penetrating flux and the concentration distribution of H atoms through the sample thickness are determined. This knowledge is important, in particular, when designing protective structures against hydrogen embrittlement or membrane technologies for producing high-purity hydrogen. The computational algorithm enables using the model in the analysis of extreme regimes for structural materials (pressure drops, high temperatures, unsteady heating), identifying the limiting factors under specific operating conditions, and saving on costly experiments (especially in deuterium-tritium investigations).

  3. Aksenov A.A., Zhluktov S.V., Shmelev V.V., Shaporenko E.V., Shepelev S.F., Rogozhkin S.A., Krylov A.N.
    Numerical investigations of mixing non-isothermal streams of sodium coolant in T-branch
    Computer Research and Modeling, 2017, v. 9, no. 1, pp. 95-110

    Numerical investigation of the mixing of non-isothermal streams of sodium coolant in a T-branch is carried out in the FlowVision CFD software. The study is aimed at substantiating the applicability of different approaches to predicting the oscillating behavior of the flow in the mixing zone and to simulating temperature pulsations. The following approaches are considered: URANS (Unsteady Reynolds-Averaged Navier-Stokes), LES (Large Eddy Simulation) and quasi-DNS (Direct Numerical Simulation). One of the main tasks of the work is identifying the advantages and drawbacks of the aforementioned approaches.

    The numerical investigation of temperature pulsations, arising in the liquid and the T-branch walls from the mixing of non-isothermal sodium coolant streams, was carried out within a mathematical model assuming that the flow is turbulent, the fluid density does not depend on pressure, and heat exchange proceeds between the coolant and the T-branch walls. The LMS model, designed for modeling turbulent heat transfer, was used in the calculations within the URANS approach. The model allows calculation of the Prandtl number distribution over the computational domain.

    A preliminary study was dedicated to estimating the influence of the computational grid on the development of the oscillating flow and the character of the temperature pulsations within the aforementioned approaches. The study resulted in the formulation of grid-generation criteria for each approach.

    Then, calculations of three flow regimes were carried out. The regimes differ in the ratios of the sodium mass flow rates and temperatures at the T-branch inlets. Each regime was calculated using the URANS, LES and quasi-DNS approaches.

    At the final stage of the work, an analytical comparison of the numerical and experimental data was performed. The advantages and drawbacks of each approach to simulating the mixing of non-isothermal sodium coolant streams in the T-branch are revealed and formulated.

    It is shown that the URANS approach predicts the mean temperature distribution with reasonable accuracy and requires substantially fewer computational and time resources than the LES and quasi-DNS approaches. The drawback of this approach is that it does not reproduce pulsations of velocity, pressure and temperature.

    The LES and quasi-DNS approaches also predict the mean temperature with reasonable accuracy and provide oscillating solutions. The obtained amplitudes of the temperature pulsations exceed the experimental ones, while the power spectral densities at the check points inside the sodium flow agree well with the experimental data. However, in the performed numerical experiments the computational and time expenses substantially exceed those of the URANS approach: by a factor of about 350 for LES and 1500 for quasi-DNS.
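
    For reference, the spectral comparison at a check point is commonly done by estimating the power spectral density of the recorded temperature signal, for example with Welch's method. The sketch below uses a purely synthetic signal and an assumed sampling rate; it is not the data or the processing chain of the paper.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                                  # assumed probe sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
# Synthetic stand-in for a temperature pulsation record at a check point.
temp = 0.5 * np.sin(2 * np.pi * 7.0 * t) + 0.1 * np.random.randn(t.size)

# Welch's method: averaged periodograms of overlapping segments.
freq, psd = welch(temp - temp.mean(), fs=fs, nperseg=2048)
print(f"dominant pulsation frequency ~ {freq[np.argmax(psd)]:.1f} Hz")
```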

  4. Kotliarova E.V., Severilov P.A., Ivchenkov Y.P., Mokrov P.V., Chekanov M.O., Gasnikova E.V., Sharovatova Y.I.
    Speeding up the two-stage simultaneous traffic assignment model
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 343-355

    This article describes possible improvements to the code of the simultaneous multi-stage transport model aimed at speeding up computations and improving the model detail. The model consists of two blocks: the first block calculates the correspondence matrix, and the second computes the equilibrium distribution of traffic flows along the routes. The first block uses a matrix of transport costs, which describes the cost (time, in our case) of travel from one zone to another, to calculate the correspondence matrix. The second block determines how exactly the drivers (agents) are distributed along the possible paths; knowing the distribution of the flows along the paths, it is possible to recalculate the cost matrix. Equilibrium in the two-stage traffic flow model is a fixed point of the sequence of the two described blocks. In this paper we report an attempt to speed up the Dijkstra's-algorithm part of the model, which computes the shortest paths between points and has to be re-run after each iteration of the flow-distribution part. We also study and implement road pricing in the model code, and we replace the Sinkhorn algorithm used in the correspondence-matrix calculation with a faster implementation. At the beginning of the paper we give a short theoretical overview of the motivation for transport modelling, discuss current modelling approaches, and provide an example demonstrating how the whole cycle of multi-stage transport modelling works.
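
    For orientation, the correspondence-matrix step in its entropy-regularized form is a matrix-scaling problem solved by Sinkhorn iterations. Below is a minimal sketch of the classical Sinkhorn algorithm on a synthetic cost matrix; it is not the authors' accelerated implementation, and the sizes, regularization parameter and data are all illustrative.

```python
import numpy as np

def sinkhorn(cost, departures, arrivals, gamma=5.0, n_iter=500):
    """Entropy-regularized matrix scaling: returns a correspondence matrix
    whose row sums match `departures` and column sums match `arrivals`."""
    K = np.exp(-cost / gamma)              # Gibbs kernel of the travel costs
    u = np.ones_like(departures)
    v = np.ones_like(arrivals)
    for _ in range(n_iter):                # alternate row/column rescaling
        u = departures / (K @ v)
        v = arrivals / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
n_zones = 5
cost = rng.uniform(5.0, 30.0, size=(n_zones, n_zones))   # travel times (made up)
departures = rng.uniform(1.0, 2.0, n_zones)
arrivals = departures.sum() * rng.dirichlet(np.ones(n_zones))

corr = sinkhorn(cost, departures, arrivals)
print("max row-sum error:", np.abs(corr.sum(axis=1) - departures).max())
```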

  5. Nedbailo Y.A., Surchenko A.V., Bychkov I.N.
    Reducing miss rate in a non-inclusive cache with inclusive directory of a chip multiprocessor
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 639-656

    Although the era of exponential performance growth in computer chips has ended, processor core counts have reached 16 or more even in general-purpose desktop CPUs. As DRAM throughput is unable to keep pace with this growth in computing power, CPU designers need to find ways of lowering memory traffic per instruction. The straightforward way to do this is to reduce the miss rate of the last-level cache. Assuming the “non-inclusive cache, inclusive directory” (NCID) scheme is already implemented, three ways of further reducing the cache miss rate were studied.

    The first is to achieve more uniform usage of cache banks and sets by employing hash-based interleaving and indexing. In experiments on the SPEC CPU2017 refrate tests, even the simplest XOR-based hash functions demonstrated performance increases of 3.2%, 9.1%, and 8.2% for CPU configurations with 16, 32, and 64 cores and last-level cache banks, comparable to the results of more complex matrix-, division- and CRC-based functions.
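
    The idea of XOR-based indexing can be sketched as follows: address bits above the index are folded into the bank/set index by XOR, so that power-of-two strides no longer collide in a single bank or set. The bit widths below are assumptions chosen for illustration, not the parameters of the CPU studied in the paper.

```python
# XOR-folding hash for selecting a last-level cache bank and set.
LINE_BITS = 6       # 64-byte cache line (assumed)
BANK_BITS = 5       # 32 LLC banks (assumed)
SET_BITS = 11       # 2048 sets per bank (assumed)

def xor_hash(value: int, width: int) -> int:
    """Fold all bits of `value` into `width` bits by repeated XOR."""
    out = 0
    while value:
        out ^= value & ((1 << width) - 1)
        value >>= width
    return out

def llc_index(phys_addr: int) -> tuple[int, int]:
    """Return (bank, set) for a physical address using XOR-based hashing."""
    line = phys_addr >> LINE_BITS
    bank = xor_hash(line, BANK_BITS)
    set_idx = xor_hash(line >> BANK_BITS, SET_BITS)
    return bank, set_idx

# A power-of-two stride now spreads over banks instead of hitting a single one.
addrs = [base * (1 << 20) for base in range(8)]
print([llc_index(a) for a in addrs])
```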

    The second optimisation is aimed at reducing replication at different cache levels by means of automatically switching to the exclusive scheme when it appears optimal. A known scheme of this type, FLEXclusion, was modified for use in NCID caches and showed an average performance gain of 3.8%, 5.4%, and 7.9% for 16-, 32-, and 64-core configurations.

    The third optimisation is to increase the effective cache capacity using compression. The compression rate of the inexpensive and fast BDI*-HL (Base-Delta-Immediate Modified, Half-Line) algorithm, designed for NCID, was measured, and the respective increase in cache capacity yielded an average performance increase of roughly 1%.
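
    The base-delta idea underlying such compressors is that a cache line whose words all lie close to a common base can be stored as that base plus narrow per-word deltas. The check below illustrates only this generic idea, not the BDI*-HL half-line variant evaluated in the paper; the word values are invented.

```python
def base_delta_compressible(words, delta_bytes=1):
    """Return True if a cache line (equal-width words) can be stored as one
    base word plus signed per-word deltas of `delta_bytes` bytes each."""
    base = words[0]
    limit = 1 << (8 * delta_bytes - 1)            # signed delta range
    return all(-limit <= w - base < limit for w in words)

# A line of nearby pointers compresses well; random-looking data does not.
pointers = [0x7f3a_1000 + 8 * i for i in range(8)]     # illustrative values
noise = [0x7f3a_1000, 0x12345678, 0xdeadbeef, 0x0, 0x1, 0x2, 0x3, 0x4]
print(base_delta_compressible(pointers), base_delta_compressible(noise))
```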

    All three optimisations can be combined; together they demonstrated performance gains of 7.7%, 16%, and 19% for CPU configurations with 16, 32, and 64 cores and banks, respectively.

  6. Sobolev O.V., Lunina N.L., Lunin V.Yu.
    The use of cluster analysis methods for the study of a set of feasible solutions of the phase problem in biological crystallography
    Computer Research and Modeling, 2010, v. 2, no. 1, pp. 91-101

    An X-ray diffraction experiment makes it possible to determine the magnitudes of the complex coefficients in the expansion of the studied electron density distribution into a Fourier series. The determination of the phase values, which are lost in the experiment, poses the central problem of the method, namely the phase problem. Some methods for solving the phase problem result in a set of feasible solutions. Cluster analysis methods may be used to investigate the composition of this set and to extract one or several typical solutions. An essential feature of the approach is that the closeness of two solutions is estimated by the map correlation between two aligned Fourier syntheses calculated with the phase sets under comparison. An interactive computer program, ClanGR, was designed to perform this analysis.
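
    The map correlation used as the closeness measure reduces, after alignment, to the standard correlation coefficient between two density maps on a common grid. A minimal sketch on synthetic grids (this is not the ClanGR code; the grids are random placeholders):

```python
import numpy as np

def map_correlation(rho1, rho2):
    """Pearson correlation between two electron-density maps on the same grid."""
    a = rho1 - rho1.mean()
    b = rho2 - rho2.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(1)
rho = rng.random((16, 16, 16))                      # synthetic "density" grid
rho_noisy = rho + 0.3 * rng.standard_normal(rho.shape)
print(f"map CC = {map_correlation(rho, rho_noisy):.3f}")
```

    A dissimilarity such as 1 - CC between every pair of phase sets could then serve as input to a clustering procedure.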

  7. Borisov A.V., Krasnobaeva L.A., Shapovalov A.V.
    Influence of diffusion and convection on the chemostat dynamics
    Computer Research and Modeling, 2012, v. 4, no. 1, pp. 121-129

    Population dynamics is considered in a modified chemostat model including diffusion, chemotaxis, and nonlocal competitive losses. To account for the influence of the external environment on the ecosystem population, a random parameter is included in the model equations. Computer simulations reveal three dynamic modes depending on the system parameters: transition from the initial state to a spatially homogeneous steady state, transition to a spatially inhomogeneous distribution of the population density, and extinction of the population.
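
    As a toy illustration of the kind of dynamics involved (diffusion plus local growth with competitive losses and a randomly perturbed environmental parameter), one can use an explicit 1D finite-difference step. This is not the paper's modified chemostat model; all coefficients are invented for the sketch.

```python
import numpy as np

# Schematic 1D density evolution with diffusion and competitive losses;
# a random coefficient stands in for environmental influence.
rng = np.random.default_rng(2)
n, dx, dt = 100, 0.1, 0.001
D, growth, loss = 0.5, 1.0, 1.0
u = 0.1 + 0.01 * rng.random(n)        # initial population density

for _ in range(5000):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2   # periodic Laplacian
    r = growth * (1 + 0.1 * rng.standard_normal())           # random environment
    u = u + dt * (D * lap + r * u - loss * u**2)
    u = np.clip(u, 0.0, None)

print(f"mean density after the transient: {u.mean():.3f}")
```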

  8. Belov S.D., Deng Z., Li W., Lin T., Pelevanyuk I., Trofimov V.V., Uzhinskiy A.V., Yan T., Yan X., Zhang G., Zhao X., Zhang X., Zhemchugov A.S.
    BES-III distributed computing status
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 469-473

    The BES-III experiment at the IHEP CAS, Beijing, is running at the high-luminosity e+e- collider BEPC-II to study the physics of charm quarks and tau leptons. The world's largest samples of J/psi and psi' events have already been collected, and a number of unique data samples in the energy range 2.5–4.6 GeV have been taken. The data volume is expected to increase by an order of magnitude in the coming years. This requires moving from a centralized computing system to a distributed computing environment, thus allowing the use of computing resources from remote sites that are members of the BES-III Collaboration. In this report the general information, latest results and development plans of the BES-III distributed computing system are presented.

  9. Epifanov A.V., Tsybulin V.G.
    Regarding the dynamics of cosymmetric predator – prey systems
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 799-813

    To study the nonlinear effects of biological species interactions, a numerical-analytical approach is being developed. The approach is based on cosymmetry theory, which accounts for the emergence of a continuous family of solutions of the differential equations, where each solution can be obtained from an appropriate initial state. In problems of mathematical ecology the onset of cosymmetry is usually connected with a number of relationships between the parameters of the system. When these relationships break down, the families vanish and a finite number of isolated solutions remains instead of a continuum; the transient process can then be long, with the dynamics taking place in a neighborhood of the family that has vanished due to the cosymmetry collapse.

    We consider a model for the spatiotemporal competition of predators or prey that accounts for directed migration, a Holling type II functional response, and a nonlinear prey growth function permitting the Allee effect. We found the conditions on the system parameters under which a cosymmetry linear with respect to the population densities exists. It is demonstrated that the cosymmetry exists for any resource function in the case of a heterogeneous habitat. Numerical experiments in MATLAB are used to compute steady states and oscillatory regimes in the case of spatial heterogeneity.
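
    For reference, the standard forms of the two local terms mentioned above are easy to write out; the parameter values below are illustrative, not those used in the paper.

```python
def holling_type_ii(prey, attack=1.0, handling=0.5):
    """Holling type II functional response: saturating predation rate."""
    return attack * prey / (1.0 + attack * handling * prey)

def allee_growth(prey, r=1.0, capacity=1.0, threshold=0.2):
    """Prey growth with a strong Allee effect: negative below the threshold."""
    return r * prey * (1.0 - prey / capacity) * (prey / threshold - 1.0)

# Below the Allee threshold the growth term is negative, above it positive.
for u in (0.1, 0.3, 0.8):
    print(u, round(allee_growth(u), 3), round(holling_type_ii(u), 3))
```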

    The dynamics of three-population interactions (two predators and one prey, two prey and one predator) are considered. The onset of families of stationary distributions and the limit cycles branching out of the equilibria of a family that lose stability are investigated in the case of a homogeneous habitat. The study of the system with two prey and one predator yielded a remarkable result on species coexistence. We have found parameter regions where three families of stable solutions can be realized: coexistence of the two prey in the absence of a predator, and stationary and oscillatory distributions of the three coexisting species. Cosymmetry collapse is analyzed, and long-term transient dynamics leading to solutions with the exclusion of one of the prey or extinction of the predator is established in the numerical experiment.

  10. Usanov M.S., Kulberg N.S., Yakovleva T.V., Morozov S.P.
    Determination of CT dose by means of noise analysis
    Computer Research and Modeling, 2018, v. 10, no. 4, pp. 525-533

    The article deals with the development of an efficient algorithm for determining the number of quanta emitted by an X-ray tube in computed tomography (CT) studies. An analysis of domestic and foreign literature showed that most work in the field of radiometry and radiography relies on tabulated values of X-ray absorption coefficients, while individual dose factors are not taken into account at all, since many studies lack a dose report; instead, an average value is used to simplify the statistics. In this regard, it was decided to develop a method for determining the number of ionizing quanta by analyzing the noise of CT data. As the basis of the algorithm, we used a mathematical model of our own design based on Poisson and Gaussian distributions of a logarithmic quantity. The resulting mathematical model was tested on CT data of a calibration phantom consisting of three plastic cylinders filled with water, whose X-ray absorption coefficient is known from tabulated values. The data were obtained from several CT devices from different manufacturers (Siemens, Toshiba, GE, Philips). The developed algorithm made it possible to calculate the number of emitted X-ray quanta per unit time. Taking into account the noise level and the radii of the cylinders, these data were converted to X-ray absorption values and compared with the tabulated values. Applying the algorithm to CT data of various configurations yielded experimental results consistent with the theoretical part and the mathematical model. The results showed good accuracy of the algorithm and the mathematical apparatus, which confirms the reliability of the obtained data. This mathematical model is already used in a CT noise reduction program of our own design, where it serves as a method for setting a dynamic noise-reduction threshold. At the moment, the algorithm is being adapted to work with real CT data of patients.
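
    The central statistical idea, estimating the number of registered quanta from noise, rests on the Poisson property that the variance of a count equals its mean; for data scaled by an unknown gain, the count in a uniform region can be recovered as mean squared divided by variance. The sketch below uses synthetic data with invented gain and count values and is not the authors' CT processing pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
true_quanta = 5000                       # photons per detector element (assumed)
gain = 0.37                              # unknown detector gain (assumed)

# Synthetic readings from a uniform region (e.g. a water-filled cylinder).
readings = gain * rng.poisson(true_quanta, size=10_000)

# For scaled Poisson data X = g*N: mean = g*N, var = g^2*N, so N = mean^2/var.
est_quanta = readings.mean() ** 2 / readings.var()
print(f"estimated quanta per element: {est_quanta:.0f} (true {true_quanta})")
```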
