-
GridFTP frontend with redirection for DMlite
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 543-547
One of the most widely used storage solutions in WLCG is the Disk Pool Manager (DPM), developed and supported by the SDC/ID group at CERN. Recently DPM went through a massive overhaul to address scalability and extensibility issues of the old code.
The new system was called DMLite. Unlike the old DPM, which was based on daemons, DMLite is arranged as a library that can be loaded directly by an application. This approach greatly improves performance and transaction rate by avoiding unnecessary inter-process communication over the network as well as threading bottlenecks.
DMLite has a modular architecture, with its core library providing only very basic functionality. Backends (storage engines) and frontends (data access protocols) are implemented as plug-in modules. DMLite clearly could not replace DPM completely without GridFTP, since GridFTP is used for most data transfers in WLCG.
In DPM, GridFTP support was implemented as a Data Storage Interface (DSI) module for Globus’ GridFTP server. In DMLite, the GridFTP module was rewritten from scratch in order to take advantage of new DMLite features and to implement new functionality. The most important improvement over the old version is the redirection capability.
With the old GridFTP frontend, a client needed to contact SRM on the head node to obtain a transfer URL (TURL) before reading or writing a file. With the new GridFTP frontend this is no longer necessary: a client may connect directly to the GridFTP server on the head node and perform file I/O using only logical file names (LFNs). The data channel is then automatically redirected to the proper disk node.
This renders the most frequently used part of SRM unnecessary, simplifies file access and improves performance. It also makes DMLite a more appealing choice for non-LHC VOs that were never much interested in SRM.
The new GridFTP frontend also makes it possible to access data on the various DMLite-supported backends such as HDFS, S3 and legacy DPM.
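As a rough sketch of what redirection changes for a client, the hypothetical example below contrasts the two access patterns using the standard globus-url-copy client; the host names, paths and TURL are invented for the example, and resolving a TURL through an SRM client is elided.

```python
import subprocess

# Hypothetical endpoints: replace with a real DPM head node and LFN.
HEAD_NODE = "dpmhead.example.org"
LFN = "/dpm/example.org/home/myvo/data/run001.root"

# Old frontend: first ask SRM on the head node for a transfer URL (TURL)
# that points at a concrete disk node (the SRM negotiation is elided here),
# then fetch the file from that TURL.
turl = "gsiftp://disk03.example.org/storage/myvo/run001.root"  # what SRM would return
subprocess.run(["globus-url-copy", turl, "file:///tmp/run001.root"], check=True)

# New frontend: talk GridFTP to the head node directly using only the LFN;
# the server redirects the data channel to the proper disk node by itself.
subprocess.run(
    ["globus-url-copy", f"gsiftp://{HEAD_NODE}{LFN}", "file:///tmp/run001.root"],
    check=True,
)
```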
-
3D molecular dynamics simulation of the thermodynamic equilibrium problem for heated nickel
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 573-579
This work is devoted to molecular dynamics modeling of thermal impact processes on a metal sample consisting of nickel atoms. To solve this problem, a continuous mathematical model based on the equations of classical Newtonian mechanics has been used; a numerical method based on the Verlet scheme has been chosen; a parallel algorithm has been proposed and implemented with the MPI and OpenMP technologies. By means of the developed parallel program, the thermodynamic equilibrium of a system of nickel atoms under heating of the sample to a desired temperature has been investigated. In the numerical experiments, both the optimal parameters of the computational procedure and the physical parameters of the analyzed process have been determined. The obtained numerical results agree well with known theoretical and experimental data.
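For reference, the velocity Verlet variant of the scheme mentioned above advances each atom as x(t+dt) = x(t) + v(t)*dt + a(t)*dt^2/2 and v(t+dt) = v(t) + (a(t) + a(t+dt))*dt/2. Below is a minimal NumPy sketch, with a placeholder harmonic force instead of the Ni-Ni interatomic potential actually used by the authors.

```python
import numpy as np

def velocity_verlet(x, v, accel, dt, n_steps):
    """Advance positions x and velocities v with the velocity Verlet scheme.

    accel(x) must return the acceleration of each particle.
    """
    a = accel(x)
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt**2   # position update
        a_new = accel(x)                   # forces at the new positions
        v = v + 0.5 * (a + a_new) * dt     # velocity update with averaged acceleration
        a = a_new
    return x, v

# Toy example: harmonic springs toward the origin stand in for a real
# interatomic potential (purely illustrative).
rng = np.random.default_rng(0)
x0 = rng.normal(size=(64, 3))
v0 = np.zeros((64, 3))
x1, v1 = velocity_verlet(x0, v0, accel=lambda x: -x, dt=1e-3, n_steps=1000)
```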
-
A CPU benchmarking characterization of ARM based processors
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 581-586
Big science projects are producing data at ever-increasing rates. Typical techniques involve storing the data to disk, after minor filtering, and then processing it in large computer farms. Data production has reached a point where on-line processing is required in order to filter the data down to manageable sizes. A potential solution involves using low-cost, low-power ARM processors in large arrays to provide massive parallelisation for data stream computing (DSC). The main advantage of using Systems on Chip (SoCs) is inherent in their design philosophy: SoCs are primarily used in mobile devices and hence consume less power while maintaining relatively good performance. A benchmarking characterisation of three different models of ARM processors is presented.
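The abstract does not name the benchmarks used; purely as an illustration of the kind of CPU characterisation involved, here is a small timing harness for one floating-point kernel (the kernel and problem size are arbitrary choices, not the authors' methodology).

```python
import time
import numpy as np

def time_matmul(n=512, repeats=5):
    """Time an n x n matrix multiply; report the best of several runs in GFLOP/s."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        _ = a @ b                           # the kernel being timed
        best = min(best, time.perf_counter() - t0)
    return 2 * n**3 / best / 1e9            # ~2*n^3 floating-point operations

print(f"matmul: {time_matmul():.2f} GFLOP/s")
```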
-
The Tier-1 resource center at the National Research Centre “Kurchatov Institute” for the ALICE, ATLAS and LHCb experiments at the Large Hadron Collider (LHC)
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 621-630
A review of the distributed computing infrastructure of the Tier-1 sites for the ALICE, ATLAS and LHCb experiments at the LHC is given. Special emphasis is placed on the main tasks and services of the Tier-1 site that operates at the Kurchatov Institute in Moscow.
-
Cataloging technology for an information fund
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 661-673
The article discusses an approach to improving information processing technology on the basis of the Question-Answer-Reaction logical-semantic network (LSN), aimed at building and maintaining a catalog service that provides efficient search for answers to questions.
The basis of such a catalog service is the set of semantic links reflecting the logic of the author's reasoning within a given publication, theme, or subject area. Structuring and maintaining these links makes it possible to work with a field of meanings, providing new opportunities for studying the document corpus of digital libraries. Cataloging of the information fund includes: formation of a lexical dictionary; formation of a classification tree according to several criteria; classification of the information fund into question-answer topics; formation of search queries matching the question-answer classification trees; automated execution of these queries on thematic search engines; analysis of the responses to the queries; and support of the LSN catalog during the operational phase (updating and refinement of the catalog). The technology is considered for two situations: (1) the information fund has already been formed; (2) the information fund is missing and must be created.
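The abstract does not spell out the catalog's data model; purely as an illustration, the hypothetical sketch below stores Question-Answer-Reaction triples at the nodes of a classification tree and answers a lookup by topic path (all names and the toy content are invented).

```python
from dataclasses import dataclass, field

@dataclass
class QARNode:
    """One logical-semantic Question-Answer-Reaction triple."""
    question: str
    answer: str
    reaction: str = ""          # e.g. a follow-up question or a reference

@dataclass
class TopicNode:
    """A node of the classification tree with attached Q-A-R triples."""
    name: str
    triples: list[QARNode] = field(default_factory=list)
    children: dict[str, "TopicNode"] = field(default_factory=dict)

    def find(self, path: list[str]) -> "TopicNode":
        """Descend the classification tree along a topic path."""
        node = self
        for part in path:
            node = node.children[part]
        return node

# Toy catalog: one branch of a classification tree.
root = TopicNode("catalog")
root.children["storage"] = TopicNode("storage")
root.children["storage"].triples.append(
    QARNode("What replaced the DPM daemons?", "The DMLite library.", "See DMLite docs.")
)
print(root.find(["storage"]).triples[0].answer)
```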
-
Using CERN cloud technologies for further ATLAS TDAQ software development and for its application to remote sensing data processing in space monitoring tasks
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 683-689
CERN cloud technologies (the CernVM project) give software developers new possibilities. The participation of the JINR ATLAS TDAQ working group in software development for the distributed data acquisition and processing system (TDAQ) of the ATLAS experiment (CERN) involves working within a dynamically developing system and its infrastructure. CERN cloud technologies, especially CernVM, provide the most effective access both to the TDAQ software and to the third-party software used in ATLAS: the Scientific Linux environment is provided by CernVM virtual machines, and the software repository is accessed through CernVM-FS. The functioning of the TDAQ middleware in the CernVM environment was studied in this work. CernVM usage is illustrated with three examples: the development of the Event Dump and Webemon packages, and the adaptation of the automatic data quality checking system of ATLAS TDAQ (the Data Quality Monitoring Framework) to radar data assessment.
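For context, CernVM-FS exposes experiment software as an ordinary read-only filesystem mounted under /cvmfs. A minimal sketch of probing such a mount from Python follows; the repository path is the public ATLAS one, while everything else (the helper, the browsing) is only an illustrative assumption.

```python
from pathlib import Path

# CernVM-FS mounts repositories as read-only directories under /cvmfs;
# files are fetched over HTTP and cached locally on first access.
ATLAS_REPO = Path("/cvmfs/atlas.cern.ch")

def repo_available(repo: Path) -> bool:
    """True if the repository is mounted (listing it triggers the autofs mount)."""
    try:
        next(repo.iterdir())
        return True
    except (OSError, StopIteration):
        return False

if repo_available(ATLAS_REPO):
    # Browse the top of the software repository (layout varies by experiment).
    for entry in sorted(ATLAS_REPO.iterdir())[:10]:
        print(entry.name)
else:
    print("CernVM-FS repository not mounted; is the cvmfs client configured?")
```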
-
Synthesis of the simulation and monitoring processes for the development of big data storage and processing facilities in physical experiments
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 691-698
The paper presents a new simulation system for grid and cloud services. The system is developed at LIT JINR, Dubna, and is aimed at improving the efficiency of grid-cloud system development by using work quality indicators of a real system to design it and predict its evolution. For this purpose, the simulation program is coupled with the real monitoring system of the grid-cloud service through a special database. The paper provides an example of using the program to simulate a fairly general cloud structure, which can also serve more general purposes.
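The abstract says only that simulation and monitoring are coupled "through a special database"; the sketch below shows one plausible shape of such a loop using SQLite, where the schema, table and column names, and the stand-in simulation are all invented for illustration.

```python
import sqlite3

# Invented schema: one row of work-quality indicators per measured interval,
# written by the monitoring system and read back to parameterize the simulation.
def demo_db(path):
    con = sqlite3.connect(path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS service_metrics (jobs_per_hour REAL, failure_rate REAL)"
    )
    con.executemany("INSERT INTO service_metrics VALUES (?, ?)",
                    [(120.0, 0.05), (140.0, 0.08)])
    con.commit()
    return con

def load_job_rates(con):
    """Averaged indicators that parameterize the simulation."""
    rate, fail = con.execute(
        "SELECT AVG(jobs_per_hour), AVG(failure_rate) FROM service_metrics"
    ).fetchone()
    return {"arrival_rate": rate, "failure_rate": fail}

def simulate(params, hours=24):
    """Trivial stand-in for the simulation program: expected completed jobs."""
    return params["arrival_rate"] * hours * (1 - params["failure_rate"])

con = demo_db(":memory:")   # in-memory demo instead of the real monitoring DB
print(simulate(load_job_rates(con)))
```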
-
The report presents an analysis of Big Data storage solutions from several directions. Its purpose is to introduce Big Data storage technology and the prospects of storage technologies, using the DIRAC software as an example. DIRAC is a software framework for distributed computing.
The report considers popular storage technologies and lists their limitations. The main problems are the storage of large data volumes, insufficient processing quality, scalability, the lack of rapid availability, and the lack of intelligent data retrieval.
Experimental computing tasks present a wide range of requirements in terms of CPU usage, data access and memory consumption, and an unstable profile of resource use over time. The DIRAC Data Management System (DMS), together with the DIRAC Storage Management System (SMS), provides the necessary functionality to execute and control all data-related activities.
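As a minimal sketch of the DMS functionality mentioned above, assuming an installed and configured DIRAC client with a valid VO proxy; the LFN and storage element names are placeholders, and exact API signatures may differ between DIRAC versions.

```python
# Initialize the DIRAC environment before importing client classes.
from DIRAC.Core.Base import Script
Script.parseCommandLine()

from DIRAC.DataManagementSystem.Client.DataManager import DataManager

dm = DataManager()
lfn = "/myvo/user/s/someuser/demo/run001.dat"   # placeholder logical file name

# Upload a local file to a storage element, registering the replica
# in the file catalog under the given LFN.
res = dm.putAndRegister(lfn, "run001.dat", "MYVO-DISK-SE")
if not res["OK"]:
    print("upload failed:", res["Message"])

# Make a second replica on another (placeholder) storage element.
dm.replicateAndRegister(lfn, "MYVO-TAPE-SE")

# Download the file by its logical name alone.
dm.getFile(lfn)
```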
-
Nowadays cloud computing is an important topic in the field of information technology and computer systems. Several companies and educational institutions have deployed cloud infrastructures to overcome problems such as easy data access, software updates with minimal cost, large or unlimited storage, cost efficiency, backup storage and disaster recovery, among other benefits compared with traditional network infrastructures. The paper presents a study of cloud computing technology for marine environmental data and processing. Cloud computing of marine environmental information is proposed for the integration and sharing of marine information resources. It is highly desirable to perform empirical studies requiring numerous interactions with web servers and transfers of very large archival data files without affecting the operational information system infrastructure. In this paper, we consider cloud computing for a virtual testbed to minimize the cost related to real-time infrastructure.
-
Natural models of parallel computations
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 781-785
The course “Natural models of parallel computing”, given for senior students of the Faculty of Computational Mathematics and Cybernetics of Moscow State University, is devoted to the supercomputer implementation of natural computational models and is, in fact, an introduction to the theory of natural computing, a relatively new branch of science formed at the intersection of mathematics, computer science and the natural sciences (especially biology). Topics of natural computing include both already classic subjects, such as cellular automata, and relatively new ones introduced in the last 10-20 years, such as swarm intelligence. Despite their biological origin, all these models are widely applied in fields related to computer data processing. Research in natural computing is closely related to the issues and technology of parallel computing. The presentation of the theoretical material is accompanied by a consideration of possible parallel computing schemes; in the practical part of the course, students are expected to produce a software implementation using MPI technology and to perform numerical experiments investigating the effectiveness of the chosen parallel computing schemes.
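As an illustration of the kind of exercise described (not taken from the course materials), here is a sketch of a 1D cellular automaton distributed over MPI ranks with mpi4py, using halo exchange between periodic neighbours; the rule number, sizes and step count are arbitrary.

```python
# Run e.g. with: mpiexec -n 4 python ca.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

ncells = 64                        # cells owned by each rank
left = (rank - 1) % size           # periodic neighbour ranks
right = (rank + 1) % size

rng = np.random.default_rng(rank)
cells = rng.integers(0, 2, ncells).astype(np.uint8)

RULE = 30                          # elementary CA rule number
table = np.array([(RULE >> i) & 1 for i in range(8)], dtype=np.uint8)

halo_l = np.empty(1, dtype=np.uint8)
halo_r = np.empty(1, dtype=np.uint8)
for _ in range(100):
    # Exchange one boundary cell with each neighbour (deadlock-free).
    comm.Sendrecv(cells[-1:], dest=right, recvbuf=halo_l, source=left)
    comm.Sendrecv(cells[:1], dest=left, recvbuf=halo_r, source=right)
    ext = np.concatenate([halo_l, cells, halo_r])
    # Each new cell is looked up from its 3-cell neighbourhood pattern.
    idx = 4 * ext[:-2] + 2 * ext[1:-1] + ext[2:]
    cells = table[idx]

print(rank, cells.sum())
```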