Search results for 'management system':
Articles found: 26
  1. Gadzhiev R.I.
    Estimation of probabilistic model of employee labor process
    Computer Research and Modeling, 2012, v. 4, no. 4, pp. 969-975

    The article presents a mathematical model, built on a Bayesian network, for estimating an employee's labor process. Particular attention is given to estimating the qualitative characteristics of the labor product. The model is intended for use in companies that operate an employee workflow management system (a minimal illustrative sketch follows this entry).

    Views (last year): 1.
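
    A minimal sketch of the Bayesian-network estimation idea described in this abstract, assuming a toy network in which product quality depends on employee skill and workload; the variables, states and probability tables below are illustrative assumptions, not taken from the article:

      # Toy Bayesian network: the quality of the labor product depends on
      # employee skill and workload. All numbers below are made up.
      P_skill = {"high": 0.6, "low": 0.4}          # prior over employee skill
      P_load = {"normal": 0.7, "overloaded": 0.3}  # prior over workload

      # Conditional probability table P(quality | skill, workload)
      P_quality = {
          ("high", "normal"):     {"good": 0.90, "poor": 0.10},
          ("high", "overloaded"): {"good": 0.70, "poor": 0.30},
          ("low", "normal"):      {"good": 0.55, "poor": 0.45},
          ("low", "overloaded"):  {"good": 0.30, "poor": 0.70},
      }

      def p_quality(q):
          """Marginal probability that the product quality equals q."""
          return sum(P_skill[s] * P_load[w] * P_quality[(s, w)][q]
                     for s in P_skill for w in P_load)

      def p_skill_given_quality(s, q):
          """Posterior over skill after observing product quality q (Bayes' rule)."""
          joint = sum(P_skill[s] * P_load[w] * P_quality[(s, w)][q] for w in P_load)
          return joint / p_quality(q)

      print("P(quality=good)              =", round(p_quality("good"), 3))
      print("P(skill=high | quality=good) =", round(p_skill_given_quality("high", "good"), 3))
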
  2. Kiryanov A.K.
    GridFTP frontend with redirection for DMlite
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 543-547

    One of the most widely used storage solutions in WLCG is the Disk Pool Manager (DPM), developed and supported by the SDC/ID group at CERN. Recently DPM went through a massive overhaul to address the scalability and extensibility issues of the old code.

    The new system was called DMLite. Unlike the old DPM, which was based on daemons, DMLite is arranged as a library that can be loaded directly by an application. This approach greatly improves performance and transaction rate by avoiding unnecessary inter-process communication over the network as well as threading bottlenecks.

    DMLite has a modular architecture, with its core library providing only the most basic functionality. Backends (storage engines) and frontends (data access protocols) are implemented as plug-in modules. DMLite clearly could not completely replace DPM without GridFTP, since GridFTP is used for most of the data transfers in WLCG.

    In DPM, GridFTP support was implemented as a Data Storage Interface (DSI) module for Globus’ GridFTP server. In DMLite the GridFTP module was rewritten from scratch in order to take advantage of new DMLite features and to implement new functionality. The most important improvement over the old version is a redirection capability.

    With the old GridFTP frontend, a client needed to contact SRM on the head node in order to obtain a transfer URL (TURL) before reading or writing a file. With the new GridFTP frontend this is no longer necessary: a client may connect directly to the GridFTP server on the head node and perform file I/O using only logical file names (LFNs). The data channel is then automatically redirected to the proper disk node (a sketch of the two access patterns follows this entry).

    This renders the most often used part of SRM unnecessary, simplifies file access and improves performance. It also makes DMLite a more appealing choice for non-LHC VOs that were never much interested in SRM.

    With the new GridFTP frontend it is also possible to access data on the various DMLite-supported backends such as HDFS, S3 and legacy DPM.

    Views (last year): 1.
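
    A schematic sketch of the access-pattern change described in this abstract, assuming only the basic `globus-url-copy SRC DST` invocation; the host name, LFN and local path are made up for the illustration:

      # Old flow (schematically): first ask SRM on the head node for a transfer
      # URL (TURL) that points at a concrete disk node, then copy from that TURL.
      # New flow: contact the GridFTP frontend on the head node directly with the
      # logical file name (LFN); the data channel is redirected to the disk node.
      import subprocess

      lfn = "/dteam/user/test/file.dat"           # logical file name (example)
      head_node = "gsiftp://dpmhead.example.org"  # DPM/DMLite head node (example)

      subprocess.run(
          ["globus-url-copy", head_node + lfn, "file:///tmp/file.dat"],
          check=True,
      )
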
  3. Reed R.G., Cox M.A., Wrigley T., Mellado B.
    A CPU benchmarking characterization of ARM based processors
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 581-586

    Big science projects are producing data at ever-increasing rates. Typical techniques involve storing the data to disk, after minor filtering, and then processing it in large computer farms. Data production has reached a point where on-line processing is required in order to filter the data down to manageable sizes. A potential solution involves using low-cost, low-power ARM processors in large arrays to provide massive parallelisation for data stream computing (DSC). The main advantage of using Systems on Chip (SoCs) is inherent in their design philosophy: SoCs are primarily used in mobile devices and hence consume less power while maintaining relatively good performance. A benchmarking characterisation of three different models of ARM processors will be presented (a toy example of such a benchmark follows this entry).

    Views (last year): 1.
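
    As a toy illustration of the kind of single-core CPU measurement such a characterisation might include (the matrix size, the pure-Python kernel and the MFLOPS metric are arbitrary choices here, not the authors' methodology):

      import time

      N = 200  # matrix size; small enough to finish quickly even on a slow core

      def matmul(a, b):
          """Naive O(N^3) matrix multiplication in pure Python."""
          n = len(a)
          c = [[0.0] * n for _ in range(n)]
          for i in range(n):
              for k in range(n):
                  aik = a[i][k]
                  for j in range(n):
                      c[i][j] += aik * b[k][j]
          return c

      a = [[float(i + j) for j in range(N)] for i in range(N)]
      b = [[float(i - j) for j in range(N)] for i in range(N)]

      t0 = time.perf_counter()
      matmul(a, b)
      elapsed = time.perf_counter() - t0
      # A naive multiply performs roughly 2*N^3 floating-point operations.
      print(f"{2 * N**3 / elapsed / 1e6:.1f} MFLOPS ({elapsed:.2f} s)")
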
  4. Bogdanov A.V., Thurein Kyaw L.
    Query optimization in relational database systems and cloud computing technology
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 649-655

    Optimization is the heart of a relational Database Management System (DBMS). The optimizer analyzes SQL statements and determines the most efficient access plan for satisfying each query request. To do so, it parses the statements to identify which tables and columns are referenced, and then consults the information and statistical data stored in the system catalog to determine the best method of carrying out the work required by the query (a minimal example of inspecting an optimizer's chosen plan follows this entry).

    Views (last year): 1.
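
    A minimal illustration of the idea of asking a DBMS for the access plan its optimizer has chosen. SQLite (via Python's standard sqlite3 module) is used here only because it is self-contained; the table, index and query are made up for the example:

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, dept TEXT, salary REAL)")
      con.execute("CREATE INDEX idx_dept ON employees (dept)")
      con.execute("ANALYZE")  # gather the statistics the planner consults

      # Ask the optimizer how it would satisfy the query: with idx_dept available
      # it reports an index search on dept rather than a full table scan.
      plan = con.execute(
          "EXPLAIN QUERY PLAN SELECT id, salary FROM employees WHERE dept = ?",
          ("sales",),
      ).fetchall()
      for row in plan:
          print(row)
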
  5. Tkachenko I.A.
    Experience of Puppet usage for management of Tier-1 GRID cluster at NRC “Kurchatov Institute”
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 735-740

    This article describes the organization of cluster management using Puppet. It covers: safety of use, i.e. protecting the computing cluster from the mass application of a wrong configuration caused by human error; collaborative work, giving each cluster administrator the ability to write and debug their own scripts independently of the others before including them in the overall cluster management system; and writing scripts that can both configure a node from scratch and update the configuration of any part of the system without affecting the rest of a node's components, regardless of the current state of the computing cluster node.

    The article compares different ways of building the hierarchy of Puppet scenarios, describes the problems associated with using “include” to organize the hierarchy, and explains the transition to a system of sequentially called classes driven by a shell script (a rough sketch of such a driver follows this entry).
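
    A rough Python analogue of the sequential-class driver mentioned above, assuming the standard `puppet apply -e` invocation for applying an inline manifest locally; the article itself uses a shell script, and the class names below are hypothetical:

      import subprocess

      # Classes are applied one by one in an explicit order, so each
      # administrator's class can be written and debugged in isolation
      # before it is added to the common sequence.
      node_classes = [
          "base::packages",
          "base::network",
          "grid::batch_client",
          "grid::monitoring",
      ]

      for cls in node_classes:
          # `puppet apply -e` applies a one-line inline manifest on the node.
          subprocess.run(["puppet", "apply", "-e", f"include {cls}"], check=True)
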

  6. Ustimenko O.V.
    Features DIRAC data management
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 741-744

    The report presents an analysis of Big Data storage solutions from several perspectives. Its purpose is to introduce Big Data storage technology and the prospects of storage technologies, using the DIRAC software as an example. DIRAC is a software framework for distributed computing.

    The report surveys popular storage technologies and lists their limitations. The main problems are the storage of very large data volumes, insufficient processing quality, scalability, the lack of rapid availability, and the absence of intelligent data retrieval.

    Experimental computing tasks impose a wide range of requirements in terms of CPU usage, data access and memory consumption, and their resource-use profile is unstable over time. The DIRAC Data Management System (DMS), together with the DIRAC Storage Management System (SMS), provides the functionality needed to execute and control all data-related activities (a hedged usage sketch follows this entry).

    Views (last year): 2.
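
    A hedged sketch of typical DIRAC data-management operations, driven here through DIRAC's command-line tools; the LFN, local path and storage-element name are made up, and the exact tool names and argument order should be checked against the installed DIRAC version:

      import subprocess

      lfn = "/vo.example.org/user/t/test/data.root"   # logical file name (example)
      local_path = "/tmp/data.root"                   # local replica (example)
      storage_element = "EXAMPLE-disk"                # target SE name (example)

      # Upload the local file and register it in the file catalogue on the SE.
      subprocess.run(["dirac-dms-add-file", lfn, local_path, storage_element], check=True)

      # Later, retrieve the file by its LFN; the DMS resolves where a replica lives.
      subprocess.run(["dirac-dms-get-file", lfn], check=True)
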