Search results for 'throughput':
Articles found: 7
  1. Kutovskiy N.A., Nechaevskiy A.V., Ososkov G.A., Pryahina D.I., Trofimov V.V.
    Simulation of interprocessor interactions for MPI-applications in the cloud infrastructure
    Computer Research and Modeling, 2017, v. 9, no. 6, pp. 955-963

    A new cloud center for parallel computing is to be created in the Laboratory of Information Technologies (LIT) of the Joint Institute for Nuclear Research (JINR), which is expected to significantly improve the efficiency of numerical calculations and expedite the delivery of new physically meaningful results through more rational use of computing resources. To optimize a scheme of parallel computations in a cloud environment, it is necessary to test this scheme for various combinations of equipment parameters (processor speed and number, throughput of the communication network, etc.). The parallel MPI algorithm for calculations of long Josephson junctions (LDJ) is chosen as a test problem. The impact of the above factors of the computing environment on the computing speed of the test problem is evaluated by simulation with the SyMSim simulation program developed in LIT.

    Simulating the LDJ calculations in the cloud environment enables users to find, without a series of test runs, the optimal number of CPUs for a given network type before running the calculations in a real computing environment. This can save significant computation time and computing resources. The main parameters of the model were obtained from a computational experiment conducted on a special cloud-based testbed. The computational experiments showed that the pure computation time decreases in inverse proportion to the number of processors but depends significantly on network bandwidth. Comparison of the empirical results with the simulation results showed that the simulation model correctly reproduces parallel calculations performed using MPI technology. It also confirms our recommendation: to speed up calculations of this type, both the number of CPUs and the network throughput should be increased at the same time. The simulation results also allow deriving an empirical analytical formula expressing the dependence of the calculation time on the number of processors for a fixed system configuration. The obtained formula can be applied to other similar studies but requires additional tests to determine the values of its variables.
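
    To make the reported scaling behaviour concrete, the sketch below fits a hypothetical two-term model T(p) = a/p + b to made-up timings: a stands for the part that shrinks with the number of processors, b for the communication part limited by the network. The model form, function names and sample numbers are illustrative assumptions, not the empirical formula obtained in the paper.

```python
# Minimal sketch: fitting a hypothetical T(p) = a/p + b timing model
# to measured wall-clock times. The sample data and the model form are
# illustrative assumptions, not results from the paper.
import numpy as np

def fit_timing_model(procs, times):
    """Least-squares fit of T(p) = a/p + b; returns (a, b)."""
    # The model is linear in the unknowns: T = a * (1/p) + b * 1.
    design = np.column_stack([1.0 / procs, np.ones_like(procs)])
    (a, b), *_ = np.linalg.lstsq(design, times, rcond=None)
    return a, b

if __name__ == "__main__":
    procs = np.array([1, 2, 4, 8, 16], dtype=float)    # CPU counts (assumed)
    times = np.array([100.0, 52.0, 28.0, 16.0, 10.5])  # seconds (assumed)
    a, b = fit_timing_model(procs, times)
    print(f"T(p) = {a:.1f}/p + {b:.1f} s")
    # b approximates the communication-bound part that does not shrink
    # with more CPUs, which is why network throughput matters as p grows.
```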

    Views (last year): 10. Citations: 1 (RSCI).
  2. Nedbailo Y.A., Surchenko A.V., Bychkov I.N.
    Reducing miss rate in a non-inclusive cache with inclusive directory of a chip multiprocessor
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 639-656

    Although the era of exponential performance growth in computer chips has ended, processor core counts have reached 16 or more even in general-purpose desktop CPUs. As DRAM throughput is unable to keep pace with this growth in computing power, CPU designers need to find ways of lowering memory traffic per instruction. The straightforward way to do this is to reduce the miss rate of the last-level cache. Assuming a “non-inclusive cache, inclusive directory” (NCID) scheme is already implemented, three ways of further reducing the cache miss rate were studied.

    The first is to achieve more uniform usage of cache banks and sets by employing hash-based interleaving and indexing. In experiments on the SPEC CPU2017 refrate tests, even the simplest XOR-based hash functions demonstrated performance increases of 3.2%, 9.1%, and 8.2% for CPU configurations with 16, 32, and 64 cores and last-level cache banks, comparable to the results of more complex matrix-, division-, and CRC-based functions.
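
    As an illustration of what XOR-based indexing of this kind can look like, here is a minimal sketch that folds the upper bits of a physical address into the bank and set indices; the line size, bank count and set count are assumed values, not the configuration or the hash functions studied in the paper.

```python
# Sketch of XOR-folded bank/set selection for a banked last-level cache.
# Field widths (64-byte lines, 16 banks, 2048 sets per bank) are assumed
# for illustration only.
LINE_BITS = 6      # 64-byte cache line
BANK_BITS = 4      # 16 banks
SET_BITS = 11      # 2048 sets per bank

def xor_fold(value: int, width: int) -> int:
    """XOR together consecutive width-bit chunks of value."""
    mask, out = (1 << width) - 1, 0
    while value:
        out ^= value & mask
        value >>= width
    return out

def bank_and_set(phys_addr: int) -> tuple[int, int]:
    """Map a physical address to (bank, set) using XOR-folded upper bits."""
    line_addr = phys_addr >> LINE_BITS
    bank = xor_fold(line_addr, BANK_BITS)              # spreads strides over banks
    set_idx = xor_fold(line_addr >> BANK_BITS, SET_BITS)
    return bank, set_idx

if __name__ == "__main__":
    # A power-of-two stride that would map to a single bank with plain
    # modulo indexing now spreads across several banks and sets.
    for addr in range(0, 16 * 65536, 65536):
        print(hex(addr), bank_and_set(addr))
```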

    The second optimisation aims to reduce replication across cache levels by automatically switching to the exclusive scheme when it appears optimal. A known scheme of this type, FLEXclusion, was modified for use in NCID caches and showed an average performance gain of 3.8%, 5.4%, and 7.9% for 16-, 32-, and 64-core configurations.
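
    The following toy sketch only illustrates the general idea of switching a shared cache between non-inclusive and exclusive fill behaviour based on observed replication; it is not the FLEXclusion mechanism or its NCID adaptation from the paper, and the counters, epoch length and threshold are arbitrary assumptions.

```python
# Toy sketch of switching a shared cache level between non-inclusive and
# exclusive fill policies based on observed replication. NOT the
# FLEXclusion/NCID mechanism from the paper; counters, threshold and
# epoch length are illustrative assumptions.
class InclusionPolicySwitch:
    def __init__(self, epoch: int = 100_000, threshold: float = 0.3):
        self.epoch = epoch          # fills per decision epoch (assumed)
        self.threshold = threshold  # replication ratio that triggers exclusion
        self.fills = 0
        self.replicated_fills = 0   # fills whose line also sits in a private cache
        self.exclusive = False

    def on_llc_fill(self, also_in_private_cache: bool) -> None:
        self.fills += 1
        if also_in_private_cache:
            self.replicated_fills += 1
        if self.fills >= self.epoch:
            ratio = self.replicated_fills / self.fills
            # Heavy replication wastes capacity -> prefer exclusive fills;
            # little replication -> keep non-inclusive fills for faster hits.
            self.exclusive = ratio > self.threshold
            self.fills = self.replicated_fills = 0

    def should_allocate_in_llc(self, also_in_private_cache: bool) -> bool:
        # In exclusive mode, skip LLC allocation for lines kept privately.
        return not (self.exclusive and also_in_private_cache)
```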

    The third optimisation is to increase the effective cache capacity using compression. The compression rate of the inexpensive and fast BDI*-HL (Base-Delta-Immediate Modified, Half-Line) algorithm, designed for NCID, was measured, and the resulting increase in cache capacity yielded roughly a 1% average performance increase.
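
    For readers unfamiliar with base-delta compression, the sketch below checks whether a cache line can be stored as one base value plus small deltas; it is a generic illustration of the BDI idea with assumed word and delta sizes, not the BDI*-HL half-line variant measured in the paper.

```python
# Sketch of a base+delta compressibility check in the spirit of
# Base-Delta-Immediate (BDI) cache compression. Generic illustration only;
# word size and delta width are assumed, and this is not the BDI*-HL
# (Half-Line) variant from the paper.
import struct

def compresses_base_delta(line: bytes, word_size: int = 8, delta_bytes: int = 1) -> bool:
    """True if every word in the line fits as base + small signed delta."""
    assert len(line) % word_size == 0
    fmt = {4: "I", 8: "Q"}[word_size]
    words = struct.unpack(f"<{len(line)//word_size}{fmt}", line)
    base = words[0]
    limit = 1 << (8 * delta_bytes - 1)              # signed delta range
    return all(-limit <= (w - base) < limit for w in words)

if __name__ == "__main__":
    # Pointer-like values sharing a common base compress well...
    ptrs = struct.pack("<8Q", *[0x7FFF_A000_0000 + 16 * i for i in range(8)])
    # ...while unrelated values do not.
    rand = struct.pack("<8Q", *[i * 0x9E3779B97F4A7C15 & (2**64 - 1) for i in range(8)])
    print(compresses_base_delta(ptrs), compresses_base_delta(rand))
```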

    All three optimisations can be combined; together they demonstrated a performance gain of 7.7%, 16%, and 19% for CPU configurations with 16, 32, and 64 cores and banks, respectively.

  3. Chernov I.A.
    High-throughput identification of hydride phase-change kinetics models
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 171-183

    Metal hydrides are an interesting class of chemical compounds that can reversibly bind a large amount of hydrogen and are, therefore, of interest for energy applications. Understanding the factors affecting the kinetics of hydride formation and decomposition is especially important. Features of the material, the experimental setup and the conditions affect the mathematical description of the processes, which can undergo significant changes during the processing of experimental data. The article proposes a general approach to the numerical modeling of the formation and decomposition of metal hydrides and to solving the inverse problems of estimating material parameters from measurement data. The models are divided into two classes: diffusive models, which take into account the gradient of hydrogen concentration in the metal lattice, and models with fast diffusion. The former are more complex and take the form of non-classical boundary value problems of parabolic type; a rather general approach to the grid solution of such problems is described. The latter are solved relatively simply but can change greatly when the model assumptions change. Our experience in processing experimental data shows that a flexible software tool is needed: a tool that allows, on the one hand, building models from standard blocks and freely changing them if necessary, and, on the other hand, avoiding the implementation of routine algorithms. It should also be adapted to high-performance systems of different paradigms. These conditions are satisfied by the HIMICOS library presented in the paper, which has been tested on a large amount of experimental data. It allows simulating the kinetics of formation and decomposition of metal hydrides, as well as related problems, at three levels of abstraction. At the low level, the user defines the interface procedures, such as calculating the time layer on the basis of the previous layer or the entire history, calculating the observed value and the independent variable from the task variables, and comparing the curve with the reference one. Special algorithms can be used for solving quite general parabolic-type boundary value problems with free boundaries and with various quasilinear (i.e., linear with respect to the derivative only) boundary conditions, as well as for calculating the distance between curves in different metric spaces and with different normalization; this is the middle level of abstraction. At the high level, it is enough to choose a ready, tested model for a particular material and modify it with respect to the experimental conditions.
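
    A minimal sketch of the identification idea, under strong assumptions: a single-parameter fast-diffusion (contracting-volume) kinetic model is simulated over a grid of candidate rate constants and compared with a reference curve in the L2 norm. The model, the parameter grid and the synthetic data are illustrative; this is not the HIMICOS API.

```python
# Minimal sketch of identifying a kinetic parameter by scanning candidate
# values and comparing simulated curves with a reference curve. The
# single-parameter contracting-volume model, the parameter grid and the
# synthetic "measured" data are illustrative assumptions; this is not the
# HIMICOS library.
import numpy as np

def simulate_reacted_fraction(k: float, t: np.ndarray) -> np.ndarray:
    """Contracting-volume model: alpha(t) = 1 - (1 - k*t/3)^3, clipped at full conversion."""
    core = np.clip(1.0 - k * t / 3.0, 0.0, None)
    return 1.0 - core**3

def identify_rate_constant(t, measured, k_grid):
    """Return the k from k_grid whose simulated curve is closest in the L2 norm."""
    errors = [np.linalg.norm(simulate_reacted_fraction(k, t) - measured) for k in k_grid]
    return k_grid[int(np.argmin(errors))]

if __name__ == "__main__":
    t = np.linspace(0.0, 100.0, 200)                     # time grid (assumed units)
    true_k = 0.021
    measured = simulate_reacted_fraction(true_k, t)
    measured += np.random.default_rng(0).normal(0.0, 0.01, t.size)  # synthetic noise
    k_grid = np.linspace(0.001, 0.05, 500)               # candidate rate constants
    print("identified k =", identify_rate_constant(t, measured, k_grid))
```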

  4. Koganov A.V., Zlobin A.I., Rakcheeva T.A.
    The task of trajectory calculation with the homogenous distribution of results
    Computer Research and Modeling, 2014, v. 6, no. 5, pp. 803-828

    We consider a new set of tests designed to detect the human capability for parallel calculation. The new tests provide a homogeneous statistical distribution of results, in contrast to the tests discussed in our previous works. This feature simplifies the analysis of test results and decreases the estimate of the statistical error. The new experimental data are close to the results obtained in previous experiments.

    Citations: 3 (RSCI).
  5. Cox M.A., Reed R.G., Mellado B.
    The development of an ARM system on chip based processing unit for data stream computing
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 505-509

    Modern big science projects are becoming highly data intensive, to the point where offline processing of stored data is infeasible. High data throughput computing, or Data Stream Computing, is required for future projects to deal with terabytes of data per second, which cannot be stored in long-term storage elements. Conventional data-centres based on typical server-grade hardware are expensive and are biased towards processing power. The overall I/O bandwidth can be increased with massive parallelism, usually at the expense of excessive processing power and high energy consumption. An ARM System on Chip (SoC) based processing unit may address the issues of system I/O and CPU balance, affordability and energy efficiency, since ARM SoCs are mass produced and designed to be energy efficient for use in mobile devices. Such a processing unit is currently in development, with a design goal of 20 Gb/s I/O throughput and significant processing power. The I/O capabilities of consumer ARM Systems on Chip are discussed, along with the performance and I/O throughput tests conducted to date.
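
    The sketch below shows how such an I/O throughput figure is typically computed, as bytes transferred times eight divided by elapsed time, using a TCP loopback transfer; the port, buffer size and data volume are arbitrary assumptions, and a loopback run only illustrates the arithmetic, not a 20 Gb/s hardware path.

```python
# Minimal sketch of an I/O throughput measurement over TCP loopback:
# send a fixed volume of data and report Gb/s as bytes*8 / elapsed time.
# Port, buffer size and data volume are arbitrary assumptions; a loopback
# test only illustrates the arithmetic, not a 20 Gb/s hardware path.
import socket, threading, time

PORT, CHUNK, TOTAL = 50007, 1 << 20, 1 << 30   # 1 MiB chunks, 1 GiB total

def sink():
    """Accept one connection and drain everything sent to it."""
    with socket.create_server(("127.0.0.1", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

if __name__ == "__main__":
    threading.Thread(target=sink, daemon=True).start()
    time.sleep(0.2)                              # give the server time to start
    buf = bytes(CHUNK)
    sent, start = 0, time.perf_counter()
    with socket.create_connection(("127.0.0.1", PORT)) as s:
        while sent < TOTAL:
            s.sendall(buf)
            sent += CHUNK
    elapsed = time.perf_counter() - start
    print(f"throughput = {sent * 8 / elapsed / 1e9:.2f} Gb/s")
```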

    Views (last year): 1.
  6. Reed R.G., Cox M.A., Wrigley T., Mellado B.
    A CPU benchmarking characterization of ARM based processors
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 581-586

    Big science projects are producing data at ever-increasing rates. Typical techniques involve storing the data to disk, after minor filtering, and then processing it in large computer farms. Data production has reached a point where on-line processing is required in order to filter the data down to manageable sizes. A potential solution involves using low-cost, low-power ARM processors in large arrays to provide massive parallelisation for data stream computing (DSC). The main advantage of using Systems on Chip (SoCs) is inherent in their design philosophy: SoCs are primarily used in mobile devices and hence consume less power while maintaining relatively good performance. A benchmarking characterisation of three different models of ARM processors will be presented.

    Views (last year): 1.
  7. Wrigley T., Reed R.G., Mellado B.
    Memory benchmarking characterisation of ARM-based SoCs
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 607-613

    Computational intensity is traditionally the focus of large-scale computing system designs, generally leaving such designs ill-equipped to efficiently handle throughput-oriented workloads. In addition, cost and energy consumption considerations for large-scale computing systems in general remain a source of concern. A potential solution involves using low-cost, low-power ARM processors in large arrays in a manner which provides massive parallelisation and high rates of data throughput (relative to existing large-scale computing designs). Giving greater priority to both throughput rate and cost considerations increases the relevance of primary memory performance and design optimisations to overall system performance. Using several primary memory performance benchmarks to evaluate various aspects of RAM and cache performance, we characterise the performance of four different models of ARM-based system-on-chip, namely the Cortex-A9, Cortex-A7, Cortex-A15 r3p2 and Cortex-A15 r3p3. We then discuss the relevance of these results to high-volume computing and the potential of ARM processors.
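
    As a rough illustration of what a primary-memory bandwidth measurement involves, the sketch below times a STREAM-style copy over arrays much larger than the caches and reports bytes moved per second; the array size and repetition count are assumptions, and it does not reproduce the benchmarks used in the paper.

```python
# Minimal sketch of a STREAM-style "copy" bandwidth measurement:
# move an array much larger than the caches and report GB/s as
# bytes moved / elapsed time. Array size and repetition count are
# assumptions; this only illustrates the calculation.
import time
import numpy as np

N, REPS = 64 * 1024 * 1024, 10             # 64M doubles = 512 MiB per array
src = np.ones(N, dtype=np.float64)
dst = np.empty_like(src)

best = float("inf")
for _ in range(REPS):
    start = time.perf_counter()
    np.copyto(dst, src)                    # copy kernel: dst[i] = src[i]
    best = min(best, time.perf_counter() - start)

moved = 2 * src.nbytes                     # one read + one write per element
print(f"copy bandwidth = {moved / best / 1e9:.1f} GB/s")
```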
