Search results for 'distributed computing':
Articles found: 99
  1. Nedbailo Y.A., Surchenko A.V., Bychkov I.N.
    Reducing miss rate in a non-inclusive cache with inclusive directory of a chip multiprocessor
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 639-656

    Although the era of exponential performance growth in computer chips has ended, processor core numbers have reached 16 or more even in general-purpose desktop CPUs. As DRAM throughput is unable to keep pace with this growth in computing power, CPU designers need to find ways of lowering memory traffic per instruction. The straightforward way to do this is to reduce the miss rate of the last-level cache. Assuming a “non-inclusive cache, inclusive directory” (NCID) scheme is already implemented, three ways of further reducing the cache miss rate were studied.

    The first is to achieve more uniform usage of cache banks and sets by employing hash-based interleaving and indexing. In experiments on the SPEC CPU2017 refrate tests, even the simplest XOR-based hash functions demonstrated performance increases of 3.2%, 9.1%, and 8.2% for CPU configurations with 16, 32, and 64 cores and last-level cache banks, comparable to the results of more complex matrix-, division-, and CRC-based functions.
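
    The following is an illustrative sketch (not taken from the article) of the XOR-folding idea behind such hash functions: higher address bits are folded into the bank and set index so that regular, strided access patterns spread more evenly across banks and sets. The line size, bank count, and set count below are assumptions, not the parameters studied in the paper.

        # Illustrative XOR-based bank/set indexing for a last-level cache.
        # All sizes are hypothetical (64-byte lines, 32 banks, 2048 sets per bank).
        LINE_BITS = 6
        BANK_BITS = 5
        SET_BITS = 11

        def xor_hash_index(addr: int) -> tuple[int, int]:
            """Fold higher address bits into the bank and set index with XOR."""
            line_addr = addr >> LINE_BITS
            bank = (line_addr ^ (line_addr >> BANK_BITS)
                    ^ (line_addr >> (2 * BANK_BITS))) & ((1 << BANK_BITS) - 1)
            set_part = line_addr >> BANK_BITS
            index = (set_part ^ (set_part >> SET_BITS)) & ((1 << SET_BITS) - 1)
            return bank, index

        # A power-of-two stride no longer maps every access to the same bank.
        for a in range(0, 8 * (1 << 16), 1 << 16):
            print(hex(a), xor_hash_index(a))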

    The second optimisation is aimed at reducing replication at different cache levels by automatically switching to the exclusive scheme when it appears optimal. A known scheme of this type, FLEXclusion, was modified for use in NCID caches and showed an average performance gain of 3.8%, 5.4%, and 7.9% for 16-, 32-, and 64-core configurations.

    The third optimisation is to increase the effective cache capacity using compression. The compression rate of the inexpensive and fast BDI*-HL (Base-Delta-Immediate Modified, Half-Line) algorithm, designed for NCID, was measured, and the respective increase in cache capacity yielded roughly 1% of the average performance increase.
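
    The BDI*-HL variant itself is not reproduced here; the sketch below shows only the generic base-delta test at the heart of Base-Delta-Immediate schemes: a cache line is compressible if every word fits a narrow signed delta from a common base. The 8-word line and 1-byte delta width are illustrative assumptions.

        def bdi_compressible(words, delta_bytes=1):
            """Generic base+delta test: does every word lie within a narrow
            signed delta of the first word (the base)?  A full BDI codec also
            tries several base/delta widths and a zero (immediate) base."""
            base = words[0]
            limit = 1 << (8 * delta_bytes - 1)
            return all(-limit <= w - base < limit for w in words)

        # Pointer-like values with small offsets compress; unrelated values do not.
        print(bdi_compressible([0x7FFF0000 + 8 * i for i in range(8)]))    # True
        print(bdi_compressible([0x12345678, 0x0, 0xDEADBEEF, 0x7FFF0000])) # False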

    All three optimisations can be combined; together they demonstrated performance gains of 7.7%, 16%, and 19% for CPU configurations with 16, 32, and 64 cores and banks, respectively.

  2. Sobolev O.V., Lunina N.L., Lunin V.Yu.
    The use of cluster analysis methods for the study of a set of feasible solutions of the phase problem in biological crystallography
    Computer Research and Modeling, 2010, v. 2, no. 1, pp. 91-101

    An X-ray diffraction experiment allows determining the magnitudes of the complex coefficients in the decomposition of the studied electron density distribution into a Fourier series. Determining the phase values lost in the experiment constitutes the central problem of the method, namely the phase problem. Some methods for solving the phase problem result in a set of feasible solutions. Cluster analysis methods may be used to investigate the composition of this set and to extract one or several typical solutions. An essential feature of the approach is the estimation of the closeness of two solutions by the map correlation between two aligned Fourier syntheses calculated with the use of the phase sets under comparison. An interactive computer program, ClanGR, was designed to perform this analysis.
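
    A minimal sketch of the closeness measure mentioned above, assuming the two Fourier syntheses have already been aligned and sampled on a common grid: the map correlation is then simply the Pearson correlation of the two density arrays, and 1 - CC can serve as the pairwise distance fed to a clustering procedure.

        import numpy as np

        def map_correlation(rho1: np.ndarray, rho2: np.ndarray) -> float:
            """Pearson correlation between two electron-density maps sampled
            on the same grid (both maps are assumed to be aligned already)."""
            a = rho1.ravel() - rho1.mean()
            b = rho2.ravel() - rho2.mean()
            return float(a @ b / np.sqrt((a @ a) * (b @ b)))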

    Views (last year): 2.
  3. Borisov A.V., Krasnobaeva L.A., Shapovalov A.V.
    Influence of diffusion and convection on the chemostat dynamics
    Computer Research and Modeling, 2012, v. 4, no. 1, pp. 121-129

    Population dynamics is considered in a modified chemostat model including diffusion, chemotaxis, and nonlocal competitive losses. To account for the influence of the external environment on the population of the ecosystem, a random parameter is included in the model equations. Computer simulations reveal three dynamic modes depending on the system parameters: a transition from the initial state to a spatially homogeneous steady state, a transition to a spatially inhomogeneous distribution of population density, and elimination of the population.

    Views (last year): 1.
  4. Belov S.D., Deng Z., Li W., Lin T., Pelevanyuk I., Trofimov V.V., Uzhinskiy A.V., Yan T., Yan X., Zhang G., Zhao X., Zhang X., Zhemchugov A.S.
    BES-III distributed computing status
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 469-473

    The BES-III experiment at the IHEP CAS, Beijing, is running at the high-luminosity e+e- collider BEPC-II to study the physics of charm quarks and tau leptons. The world's largest samples of J/psi and psi' events have already been collected, and a number of unique data samples in the energy range 2.5–4.6 GeV have been taken. The data volume is expected to increase by an order of magnitude in the coming years. This requires moving from a centralized computing system to a distributed computing environment, thus allowing the use of computing resources at remote sites — members of the BES-III Collaboration. In this report the general information, latest results, and development plans of the BES-III distributed computing system are presented.

    Views (last year): 3.
  5. Epifanov A.V., Tsybulin V.G.
    Regarding the dynamics of cosymmetric predator – prey systems
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 799-813

    To study nonlinear effects of the interaction of biological species, a numerical-analytical approach is developed. The approach is based on cosymmetry theory, which accounts for the emergence of a continuous family of solutions to differential equations, where each solution can be obtained from an appropriate initial state. In problems of mathematical ecology the onset of cosymmetry is usually connected with a number of relationships between the parameters of the system. When these relationships break down, the families vanish: instead of a continuum of solutions we obtain a finite number of isolated solutions, and the transient process can be long, with the dynamics taking place in a neighborhood of the family that has vanished due to the cosymmetry collapse.

    We consider a model for the spatiotemporal competition of predators or prey with account for directed migration, a Holling type II functional response, and a nonlinear prey growth function permitting the Allee effect. We find the conditions on the system parameters under which a cosymmetry linear with respect to the population densities exists. It is demonstrated that the cosymmetry exists for any resource function in the case of a heterogeneous habitat. Numerical experiments in MATLAB are used to compute steady states and oscillatory regimes in the case of spatial heterogeneity.
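
    For reference only (the article's exact coefficients and resource functions are not reproduced here), the two ingredients named above can be written as a Holling type II functional response and a prey growth function with a strong Allee effect,

    $$f(u)=\frac{au}{1+ahu},\qquad g(u)=ru\left(1-\frac{u}{K}\right)\left(\frac{u}{A}-1\right),$$

    where $u$ is the prey density, $a$ the attack rate, $h$ the handling time, $K$ the carrying capacity, and $0<A<K$ the Allee threshold below which the growth rate becomes negative.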

    The dynamics of three-population interactions (two predators and a prey, two prey and a predator) are considered. The onset of families of stationary distributions and of limit cycles branching out of equilibria of a family that loses stability are investigated in the case of a homogeneous habitat. The study of the system with two prey and a predator yielded a remarkable result on species coexistence. We found parameter regions where three families of stable solutions can be realized: coexistence of two prey in the absence of a predator, and stationary and oscillatory distributions of three coexisting species. Cosymmetry collapse is analyzed, and long-term transient dynamics leading to solutions with the exclusion of one of the prey or the extinction of the predator is established in the numerical experiment.

    Views (last year): 12. Citations: 3 (RSCI).
  6. Usanov M.S., Kulberg N.S., Yakovleva T.V., Morozov S.P.
    Determination of CT dose by means of noise analysis
    Computer Research and Modeling, 2018, v. 10, no. 4, pp. 525-533

    The article deals with the development of an effective algorithm for determining the number of quanta emitted by an X-ray tube in computed tomography (CT) studies. An analysis of domestic and foreign literature showed that most work in the field of radiometry and radiography relies on tabulated values of X-ray absorption coefficients, while individual dose factors are not taken into account at all, since many studies lack the Dose Report; instead, an average value is used to simplify the calculation of statistics. In this regard, it was decided to develop a method for determining the number of ionizing quanta by analyzing the noise of CT data. The algorithm is based on a mathematical model of our own design combining Poisson and Gaussian distributions of the logarithmized value. The resulting mathematical model was tested on CT data of a calibration phantom consisting of three plastic cylinders filled with water, whose X-ray absorption coefficient is known from tabulated values. The data were obtained from several CT devices from different manufacturers (Siemens, Toshiba, GE, Philips). The developed algorithm made it possible to calculate the number of emitted X-ray quanta per unit time. These data, taking into account the noise level and the radii of the cylinders, were converted into X-ray absorption values, after which a comparison was made with the tabulated values. When the algorithm was applied to CT data of various configurations, the experimental data obtained were consistent with the theoretical part and the mathematical model. The results show good accuracy of the algorithm and of the mathematical apparatus, which indicates the reliability of the obtained data. This mathematical model is already used in our own CT noise-reduction program, where it serves as a method for setting a dynamic noise-reduction threshold. At the moment, the algorithm is being adapted to work with real CT data of patients.
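
    A minimal illustration (not the article's full algorithm) of the statistics it relies on: for Poisson-distributed photon counts the variance equals the mean, so after a logarithmic transform the variance is approximately 1/N, and the number of detected quanta can be estimated from the local noise level of the log-domain CT data. The count of 5000 quanta below is a made-up test value.

        import numpy as np

        def estimate_quanta(log_samples: np.ndarray) -> float:
            """Estimate the mean photon count N from log-domain samples,
            using var(ln I) ~ 1/N for Poisson counts with large N."""
            return 1.0 / np.var(log_samples, ddof=1)

        rng = np.random.default_rng(0)
        counts = rng.poisson(5000, size=100_000)   # synthetic Poisson data
        print(estimate_quanta(np.log(counts)))     # close to 5000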

    Views (last year): 23. Citations: 1 (RSCI).
  7. Vornovskikh P.A., Kim A., Prokhorov I.V.
    The applicability of the approximation of single scattering in pulsed sensing of an inhomogeneous medium
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1063-1079

    A mathematical model based on the linear integro-differential Boltzmann equation is considered in this article. The model describes radiation transfer in a scattering medium irradiated by a point source. An inverse problem for the transfer equation is posed: to determine the scattering coefficient from the time-angular distribution of the radiation flux density at a given point in space. The Neumann series representation of the solution of the radiation transfer equation is analyzed in the study of the inverse problem. The zeroth term of the series describes unscattered radiation, the first term describes the single-scattered field, and the remaining terms describe the multiply scattered field. When an approximate solution of the radiation transfer equation is computed, the single scattering approximation is widely used for regions with a small optical thickness and a low level of scattering. Using this approximation, an analytical formula is obtained for finding the scattering coefficient in the problem with additional restrictions on the initial data. To verify the adequacy of the obtained formula, a weighted Monte Carlo method for solving the transfer equation was constructed and implemented in software, taking into account multiple scattering in the medium and the space-time singularity of the radiation source. Computational experiments were carried out for problems of high-frequency acoustic sensing in the ocean. The use of the single scattering approximation is justified at least at a sensing range of about one hundred meters, where the doubly and triply scattered fields make the main contribution to the error of the formula. For larger regions, the single scattering approximation gives at best only a qualitative evaluation of the medium structure; sometimes it does not even allow determining the order of magnitude of the quantitative characteristics of the interaction of radiation with matter.
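
    In the notation of the abstract, the Neumann series splits the solution of the transfer equation as $I=\sum_{k\ge 0}I_k$, where $I_0$ is the unscattered field and $I_1$ the single-scattered field; the single scattering approximation retains only $I_0+I_1$ and neglects the terms $I_k$ with $k\ge 2$, which is why it is accurate only for optically thin, weakly scattering regions.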

  8. Skorik S.N., Pirau V.V., Sedov S.A., Dvinskikh D.M.
    Comparison of stochastic approximation and sample average approximation for saddle point problem with bilinear coupling term
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 381-391

    Stochastic optimization is a current area of research due to significant advances in machine learning and its applications to everyday problems. In this paper, we consider two fundamentally different methods for solving the stochastic optimization problem — online and offline algorithms. Each has its own qualitative advantages over the other. Offline algorithms, for instance, require solving an auxiliary problem with high accuracy; however, this can be done in a distributed manner, which opens up fundamental possibilities such as the construction of a dual problem. Despite this, both online and offline algorithms pursue a common goal — solving the stochastic optimization problem with a given accuracy. This is reflected in the comparison of the computational complexity of the two classes of algorithms, which is demonstrated in this paper.

    The comparison of the described methods is carried out for two types of stochastic problems — convex optimization and saddle point problems. For problems of stochastic convex optimization, the existing results make it possible to compare online and offline algorithms in some detail. In particular, for strongly convex problems the computational complexity of the algorithms is the same, and the strong convexity condition can be weakened to the condition of $\gamma$-growth of the objective function. From this point of view, saddle point problems are much less studied. Nevertheless, existing results allow us to outline the main directions of research. Significant progress has been made for bilinear saddle point problems using online algorithms, while offline algorithms are represented by just one study; in this paper, that example is used to demonstrate the similarity of both algorithms with the convex-optimization case. The question of the accuracy with which the auxiliary problem must be solved in the saddle point setting is also worked out. On the other hand, the stochastic saddle point problem generalizes the convex one, that is, it is its logical continuation. This is manifested in the fact that existing results from convex optimization can be transferred to saddle point problems. In this paper, such a transfer is carried out for the results of the online algorithm in the convex case, when the objective function satisfies the $\gamma$-growth condition.
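
    A minimal sketch contrasting the two approaches on a toy strongly convex stochastic problem min_x E[(x - xi)^2]/2 with xi ~ N(mu, 1): the online algorithm performs one SGD step per fresh sample, while the offline (sample average approximation) algorithm fixes a sample once and solves the resulting empirical problem exactly. The sample sizes, step sizes, and mu = 3 are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        mu = 3.0                                   # true minimizer of E[(x - xi)^2]/2

        # Online (stochastic approximation): one SGD step per fresh sample.
        x = 0.0
        for k in range(1, 10_001):
            xi = rng.normal(mu)
            x -= (1.0 / k) * (x - xi)              # gradient of (x - xi)^2 / 2

        # Offline (sample average approximation): fix a sample, solve the empirical problem.
        sample = rng.normal(mu, size=10_000)
        x_saa = sample.mean()                      # exact minimizer of the empirical objective

        print(x, x_saa)                            # both estimates approach mu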

  9. Sukhinov A.I., Chistyakov A.E., Semenyakina A.A., Nikitina A.V.
    Numerical modeling of ecologic situation of the Azov Sea with using schemes of increased order of accuracy on multiprocessor computer system
    Computer Research and Modeling, 2016, v. 8, no. 1, pp. 151-168

    The article covers the results of three-dimensional modeling of the ecological situation of shallow waters, using the Azov Sea as an example, with schemes of increased order of accuracy on the multiprocessor computer system of Southern Federal University. Discrete analogs of the convective and diffusive transfer operators of the fourth order of accuracy in the case of partial occupancy of cells were constructed and studied. The developed scheme of high (fourth) order of accuracy was used for solving problems of aquatic ecology and modeling the spatial distribution of polluting nutrients, which cause the growth of phytoplankton, many species of which are toxic and harmful. The use of high-order schemes improves the quality of the input data and decreases the error in the solutions of the model problems of aquatic ecology. Numerical experiments were conducted for the substance transport problem on the basis of schemes of the second and fourth orders of accuracy; they showed that the accuracy increased by a factor of 48.7 for the diffusion-convection problem. A mathematical algorithm was proposed and numerically implemented for reconstructing the bottom topography of shallow waters on the basis of hydrographic data (water depth at individual points or depth contours). A map of the bottom relief of the Azov Sea was generated with this algorithm; it is used to build the current fields calculated on the basis of the hydrodynamic model. The fields of water flow currents were used as input data for the aquatic ecology models. A library of two-layer iterative methods was developed for solving the nine-diagonal difference equations that arise in the discretization of the model problems for the concentrations of pollutants, plankton, and fish on the multiprocessor computer system. It improved the precision of the calculated data and made it possible to obtain operational forecasts of changes in the ecological situation of shallow waters over short time intervals.
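
    As a point of reference only (the article's scheme additionally handles partially occupied cells), a standard fourth-order central approximation of the convective derivative on a uniform grid with step $h$ reads

    $$\left.\frac{\partial u}{\partial x}\right|_i\approx\frac{-u_{i+2}+8u_{i+1}-8u_{i-1}+u_{i-2}}{12h}+O(h^4),$$

    compared with the familiar second-order formula $(u_{i+1}-u_{i-1})/(2h)+O(h^2)$.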

    Views (last year): 4. Citations: 31 (RSCI).
  10. Kireenkov A.A., Zhavoronok S.I., Nushtaev D.V.
    On tire models accounting for both deformed state and coupled dry friction in a contact spot
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 163-173

    The proposed approximate model of the rolling of a deforming wheel with a pneumatic tire allows one to account both for the forces in the tire and for the effect of dry friction on the stability of rolling in the prognosis of the shimmy phenomenon. The model is based on the theory of dry friction with combined kinematics of the relative motion of interacting bodies, i.e. under the condition of simultaneous rolling, sliding, and spinning, with account for the real shape of the contact spot and the contact pressure distribution. The resultant force vector and couple generated by the contact interaction with dry friction are defined by integration over the contact area, whereas the static contact pressure under the conditions of vanishing sliding velocity and vanishing angular velocity of spinning is computed from the finite-element solution of the static contact of a pneumatic tire with a rigid road, accounting for the real internal structure and properties of the tire. A solid finite element model of a typical tire with longitudinal tread is used below as a background. Given constant boost pressure, vertical load, and a static friction factor of 0.5, the numerical solution is constructed, as well as the corresponding solutions for lateral and torsional kinematic loading. It is shown that the contact interaction of a pneumatic tire and an absolutely rigid road can be represented, without a crucial loss of accuracy, by two typical stages, adhesion and slip; the contact area shape nevertheless remains close to a circle. Approximate diagrams are constructed for both the lateral force and the friction torque; on the initial stage the diagrams are linear, which corresponds to the elastic deformation of the tire, while on the second stage both the force and torque values are constant and correspond to the dry friction force and torque. For the last stage, approximate formulae for the longitudinal and lateral friction forces and the friction torque are constructed on the basis of the theory of dry friction with combined kinematics. The obtained model can be treated as a combination of the Keldysh model of an elastic wheel with no slip and spin and the Klimov model of a rigid wheel interacting with a road by dry friction forces.
