Search results for 'LHC':
Articles found: 6
  1. Gavrilov V.B., Golutvin I.A., Kodolova O.L., Korenkov V.V., Levchuk L.G., Shmatov S.V., Tikhonenko E.A., Zhiltsov V.E.
    RDMS CMS computing: current status and plans
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 395-398

    The Compact Muon Solenoid (CMS) is a high-performance general-purpose detector at the Large Hadron Collider (LHC) at CERN. More than twenty institutes from Russia and the Joint Institute for Nuclear Research (JINR) are involved in the Russia and Dubna Member States (RDMS) CMS Collaboration. A proper computing grid infrastructure has been constructed at the RDMS institutes for participation in the running phase of the CMS experiment. The current status of RDMS CMS computing and the plans for its development towards the next LHC start in 2015 are presented.

    Views (last year): 2.
  2. Smirnova O., Kónya B., Cameron D., Nilsen J.K., Filipčič A.
    ARC-CE: updates and plans
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 407-414

    The ARC Compute Element is becoming more popular in the WLCG and EGI infrastructures, being used not only in the Grid context but also as an interface to HPC and Cloud resources. It relies strongly on community contributions, which helps it keep up with changes in the distributed computing landscape. Future ARC plans are closely linked to the needs of LHC computing, whichever shape it may take. There are also numerous examples of ARC usage by smaller research communities through national computing infrastructure projects in different countries. As such, ARC is a viable solution for building uniform distributed computing infrastructures using a variety of resources.
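    To make the ARC-CE workflow concrete, the sketch below builds a minimal xRSL job description and hands it to the standard arcsub client. This is an illustration only: the endpoint ce.example.org is a placeholder, and the small set of xRSL attributes shown here is assumed sufficient for a trivial job (consult the NorduGrid xRSL reference for the authoritative list).

      import subprocess
      import tempfile

      # Minimal xRSL job description; the attribute set is illustrative.
      xrsl = (
          '&(executable="/bin/echo")'
          '(arguments="hello from ARC")'
          '(jobname="lhc-demo")'
          '(stdout="out.txt")'
          '(cputime="10 minutes")'
      )

      with tempfile.NamedTemporaryFile("w", suffix=".xrsl", delete=False) as f:
          f.write(xrsl)
          job_file = f.name

      # "ce.example.org" is a hypothetical ARC CE endpoint.
      subprocess.run(["arcsub", "-c", "ce.example.org", job_file], check=True)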

  3. Zarochentsev A.K., Stiforov G.G.
    ALICE computing update before start of RUN2
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 415-419

    The report presents news and updates on ALICE computing for RUN2 and RUN3.

    This includes:

    – deployment in production of the new EOS storage system;

    – migration to the CVMFS file system for software distribution (see the sketch after this list);

    – the plan for addressing the problem of “Long-Term Data Preservation”;

    – an overview of the O² (“O squared”) concept, which combines online and offline data processing;

    – an overview of the existing models for using virtual clouds for ALICE data processing. The innovations are illustrated with the example of the Russian sites.
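    Regarding the CVMFS point above: software published on CVMFS appears on every worker node as an ordinary read-only directory tree under the repository mount point, so a job can discover the available releases simply by listing it. A minimal sketch, assuming the conventional /cvmfs/alice.cern.ch mount point (the layout beneath it is not specified here):

      import os

      # Conventional CVMFS mount point for the ALICE software repository;
      # the directory layout underneath is an assumption for illustration.
      ALICE_REPO = "/cvmfs/alice.cern.ch"

      def list_releases(repo=ALICE_REPO):
          """Return the top-level entries of the repository, i.e. the
          software packages/releases visible to a worker node."""
          if not os.path.isdir(repo):
              raise RuntimeError(f"CVMFS repository not mounted: {repo}")
          return sorted(os.listdir(repo))

      if __name__ == "__main__":
          for entry in list_releases():
              print(entry)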

    Views (last year): 2.
  4. Astakhov N.S., Baginyan A.S., Belov S.D., Dolbilov A.G., Golunov A.O., Gorbunov I.N., Gromova N.I., Kashunin I.A., Korenkov V.V., Mitsyn V.V., Shmatov S.V., Strizh T.A., Tikhonenko E.A., Trofimov V.V., Voitishin N.N., Zhiltsov V.E.
    JINR TIER-1-level computing system for the CMS experiment at LHC: status and perspectives
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 455-462

    The Compact Muon Solenoid (CMS) is a high-performance general-purpose detector at the Large Hadron Collider (LHC) at CERN. A distributed data analysis system for processing and further analysis of CMS experimental data has been developed; the model foresees the obligatory use of modern grid technologies. The CMS Computing Model makes use of a hierarchy of computing centers (Tiers). The Joint Institute for Nuclear Research (JINR) takes an active part in the CMS experiment. In order to provide a proper computing infrastructure for the CMS experiment at JINR and for the Russian institutes collaborating in CMS, a Tier-1 center for the CMS experiment is being constructed at JINR. The main tasks and services of the CMS Tier-1 at JINR are described, and the status and perspectives of the center are presented.
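    The tier hierarchy mentioned in the abstract can be pictured as a tree: data recorded at the Tier-0 (CERN) is replicated to a handful of Tier-1 centres, which in turn serve regional Tier-2 sites. A toy sketch of that flow follows; the Tier-2 names and the dataset path are invented for illustration and do not reflect the actual CMS topology.

      from dataclasses import dataclass, field

      @dataclass
      class Tier:
          """One node of a CMS-style tiered computing hierarchy."""
          name: str
          level: int                      # 0 = CERN, 1 = national centre, 2 = regional
          children: list = field(default_factory=list)

          def replicate(self, dataset: str):
              """Push a dataset down the hierarchy: Tier-0 -> Tier-1 -> Tier-2."""
              print(f"Tier-{self.level} {self.name}: stored {dataset}")
              for child in self.children:
                  child.replicate(dataset)

      # Illustrative topology; JINR is a real Tier-1, the Tier-2 names are placeholders.
      t2_a = Tier("RU-T2-A", 2)
      t2_b = Tier("RU-T2-B", 2)
      jinr = Tier("JINR", 1, [t2_a, t2_b])
      cern = Tier("CERN", 0, [jinr])

      cern.replicate("/CMS/Run2015/RAW")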

    Views (last year): 3. Citations: 2 (RSCI).
  5. Kiryanov A.K.
    GridFTP frontend with redirection for DMlite
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 543-547

    One of the most widely used storage solutions in WLCG is the Disk Pool Manager (DPM), developed and supported by the SDC/ID group at CERN. Recently DPM went through a massive overhaul to address the scalability and extensibility issues of the old code.

    The new system is called DMLite. Unlike the old DPM, which was based on daemons, DMLite is arranged as a library that can be loaded directly by an application. This approach greatly improves performance and transaction rate by avoiding unnecessary inter-process communication over the network as well as threading bottlenecks.
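    The performance argument is the classic one: an in-process call costs nanoseconds, while a round trip to a separate daemon over a socket costs orders of magnitude more. The toy benchmark below (plain Python, not DMLite itself) makes the gap visible by timing a local socket echo against a direct function call.

      import socket
      import threading
      import time

      def handle(server):
          # Echo a 4-byte "request" back, standing in for a daemon's reply.
          conn, _ = server.accept()
          while conn.recv(4):
              conn.sendall(b"ok  ")
          conn.close()

      # Local "daemon" standing in for an out-of-process service.
      server = socket.socket()
      server.bind(("127.0.0.1", 0))
      server.listen(1)
      threading.Thread(target=handle, args=(server,), daemon=True).start()

      client = socket.create_connection(server.getsockname())

      N = 10_000
      t0 = time.perf_counter()
      for _ in range(N):
          client.sendall(b"ping")
          client.recv(4)
      ipc = time.perf_counter() - t0

      def in_process(x):      # stand-in for a direct library call
          return x

      t0 = time.perf_counter()
      for _ in range(N):
          in_process(b"ping")
      direct = time.perf_counter() - t0

      print(f"socket round trips: {ipc:.3f}s, direct calls: {direct:.3f}s")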

    DMLite has a modular architecture, with its core library providing only the most basic functionality. Backends (storage engines) and frontends (data access protocols) are implemented as plug-in modules. Doubtlessly, DMLite would not be able to completely replace DPM without GridFTP, as it is used for most of the data transfers in WLCG.
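    Architecturally this is an ordinary plug-in registry: the core defines narrow interfaces for backends and frontends, and concrete modules register themselves against those interfaces. A schematic sketch follows; the class and method names are invented for illustration and are not the actual DMLite C++ API.

      class StorageBackend:
          """Interface every backend plug-in (HDFS, S3, legacy DPM, ...) implements."""
          def read(self, path: str) -> bytes: ...
          def write(self, path: str, data: bytes) -> None: ...

      BACKENDS: dict[str, type] = {}

      def register_backend(scheme: str):
          """Decorator used by plug-in modules to hook into the core."""
          def wrap(cls):
              BACKENDS[scheme] = cls
              return cls
          return wrap

      @register_backend("mem")
      class InMemoryBackend(StorageBackend):
          """Trivial backend standing in for a real storage engine."""
          def __init__(self):
              self.files: dict[str, bytes] = {}
          def read(self, path):
              return self.files[path]
          def write(self, path, data):
              self.files[path] = data

      # The core stays minimal: it only dispatches to whichever plug-in is registered.
      backend = BACKENDS["mem"]()
      backend.write("/demo/file", b"payload")
      print(backend.read("/demo/file"))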

    In DPM, GridFTP support was implemented as a Data Storage Interface (DSI) module for Globus’ GridFTP server. In DMLite, an effort was made to rewrite the GridFTP module from scratch in order to take advantage of new DMLite features and to implement new functionality. The most important improvement over the old version is the redirection capability.

    With the old GridFTP frontend, a client needed to contact SRM on the head node to obtain a transfer URL (TURL) before reading or writing a file. With the new GridFTP frontend this is no longer necessary: a client may connect directly to the GridFTP server on the head node and perform file I/O using only logical file names (LFNs). The data channel is then automatically redirected to the proper disk node.
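    The control flow described here reduces to two steps: the client opens a control connection to the head node using only the LFN, and the head node answers with the disk node that actually holds the data, to which the data channel is then pointed. A schematic model under those assumptions (the names and the catalogue below are invented; the real mechanism lives inside the GridFTP server's data-channel setup):

      # Schematic redirection flow; not the actual DMLite/GridFTP implementation.

      CATALOGUE = {  # head node's view: logical file name -> disk node holding it
          "/vo/data/run1.root": "disknode03.example.org",
      }

      class HeadNode:
          def open(self, lfn: str) -> str:
              """Resolve an LFN and redirect the data channel to a disk node."""
              return CATALOGUE[lfn]

      class Client:
          def read(self, lfn: str) -> None:
              head = HeadNode()
              disk_node = head.open(lfn)   # control channel: LFN only, no SRM/TURL step
              print(f"data channel redirected to {disk_node}, reading {lfn}")

      Client().read("/vo/data/run1.root")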

    This renders the most frequently used part of SRM unnecessary, simplifies file access, and improves performance. It also makes DMLite a more appealing choice for non-LHC VOs that were never much interested in SRM.

    With the new GridFTP frontend it is also possible to access data on the various DMLite-supported backends such as HDFS, S3, and legacy DPM.

    Views (last year): 1.
  6. Berezhnaya A.Ya., Velikhov V.E., Lazin Y.A., Lyalin I.N., Ryabinkin E.A., Tkachenko I.A.
    The Tier-1 resource center at the National Research Centre “Kurchatov Institute” for the ALICE, ATLAS and LHCb experiments at the Large Hadron Collider (LHC)
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 621-630

    A review of the distributed computing infrastructure of the Tier-1 sites for the ALICE, ATLAS, and LHCb experiments at the LHC is given. Special emphasis is placed on the main tasks and services of the Tier-1 site that operates at the Kurchatov Institute in Moscow.

    Views (last year): 2.
