- Applications of on-demand virtual clusters to high performance computing
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 511-516
Virtual machines are usually associated with the ability to create them on demand by calling web services; the newly created machines then deliver resident services to their clients. However, this model does not allow clients to run an arbitrary programme on the new machines. Such usage is valuable in a high performance computing environment, where most resources are consumed by batch programmes rather than by daemons or services. In this case a cluster of virtual machines is created on demand to run a distributed or parallel programme and to save its output to network attached storage; upon completion the cluster is destroyed and its resources are released. With certain modifications this approach can be extended to deliver computational resources to the user interactively, thus providing virtual desktop as a service. Experiments show that the process of creating virtual clusters on demand can be made efficient in both cases.
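The create-run-destroy lifecycle this abstract describes can be illustrated with a short sketch. The snippet below is a minimal illustration under assumptions of my own, not the paper's implementation: it assumes the libvirt API, a prebuilt qcow2 disk image per node, and an mpirun invocation standing in for the batch programme.

```python
# Minimal sketch of the create-run-destroy lifecycle (illustrative only).
# The domain XML, node count and job command are assumptions, not the
# paper's actual configuration.
import subprocess
import libvirt

NODE_XML = """
<domain type='kvm'>
  <name>vnode-{i}</name>
  <memory unit='GiB'>2</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/vnode-{i}.qcow2'/>
      <target dev='vda'/>
    </disk>
  </devices>
</domain>
"""

def run_on_virtual_cluster(n_nodes, job_argv):
    conn = libvirt.open("qemu:///system")
    domains = []
    try:
        # 1. Create the cluster on demand: boot transient VMs.
        for i in range(n_nodes):
            domains.append(conn.createXML(NODE_XML.format(i=i), 0))
        # 2. Run the batch programme; its output goes to network storage.
        subprocess.run(job_argv, check=True)
    finally:
        # 3. Destroy the cluster and release resources upon completion.
        for dom in domains:
            dom.destroy()
        conn.close()

if __name__ == "__main__":
    run_on_virtual_cluster(4, ["mpirun", "-n", "4", "./solver"])
```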
- Methods of evaluating the effectiveness of systems for computing resources monitoring
Computer Research and Modeling, 2012, v. 4, no. 3, pp. 661-668
This article discusses the contribution of a computing resources monitoring system to the work of a distributed computing system. A method for evaluating this contribution and the performance of the monitoring system, based on measures of certainty about the state of the monitored system, is proposed. The application of this methodology to the design and development of local monitoring of the Central Information and Computing Complex of the Joint Institute for Nuclear Research is described.
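One simple way to make "certainty about the state" concrete is sketched below. This is an assumed heartbeat model chosen for illustration, not the formulas from the paper: certainty about a resource's recorded state is taken to decay with the time elapsed since it was last observed, and the monitoring system's effectiveness is summarized as the average certainty over all nodes.

```python
# Hedged illustration (not the paper's method): certainty about a node's
# recorded state decays with time since the last successful poll.
import math
import time

def state_certainty(last_seen, tau=60.0, now=None):
    """Certainty in [0, 1] that the recorded state is still valid.
    tau is an assumed characteristic time between state changes (s)."""
    now = time.time() if now is None else now
    return math.exp(-max(0.0, now - last_seen) / tau)

def monitoring_effectiveness(last_seen_by_node, tau=60.0):
    """Average certainty over all monitored nodes: one possible scalar
    measure of the monitoring system's contribution."""
    now = time.time()
    vals = [state_certainty(t, tau, now) for t in last_seen_by_node.values()]
    return sum(vals) / len(vals) if vals else 0.0
```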
- Efficient processing and classification of wave energy spectrum data with a distributed pipeline
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 517-520
Processing of large amounts of data often consists of several steps, e.g. pre- and post-processing stages, which are executed sequentially with data written to disk after each step. However, when the pre-processing stage differs for each task, a more efficient way of processing data is to construct a pipeline which streams data from one stage to another. In a more general case some processing stages can be factored into several parallel subordinate stages, forming a distributed pipeline where each stage can have multiple inputs and multiple outputs. Such a processing pattern emerges in the problem of classifying wave energy spectra based on analytic approximations, which can extract different wave systems and their parameters (e.g. wave system type, mean wave direction) from a spectrum. The distributed pipeline approach achieves good performance compared to conventional “sequential-stage” processing.
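The sketch below illustrates this pattern on a single host with multiprocessing queues: stages stream items to each other instead of writing to disk between steps, and the middle stage is factored into parallel workers. Stage names and data are illustrative stand-ins, not the paper's implementation.

```python
# Single-host sketch of a streaming pipeline with a parallel middle stage.
# read_spectra/classify/write_results are illustrative stand-ins.
from multiprocessing import Process, Queue, cpu_count

STOP = None  # sentinel marking the end of a stream

def read_spectra(out_q, n_workers):
    for spectrum in range(100):        # stand-in for real spectrum records
        out_q.put(spectrum)
    for _ in range(n_workers):         # one sentinel per downstream worker
        out_q.put(STOP)

def classify(in_q, out_q):
    while (item := in_q.get()) is not STOP:
        out_q.put(("wave-system", item))   # stand-in for the analytic fit
    out_q.put(STOP)

def write_results(in_q, n_workers):
    done = 0
    while done < n_workers:
        item = in_q.get()
        if item is STOP:
            done += 1
        else:
            print(item)                # stand-in for storing the result

if __name__ == "__main__":
    n = cpu_count()
    q1, q2 = Queue(), Queue()
    procs = [Process(target=read_spectra, args=(q1, n)),
             *[Process(target=classify, args=(q1, q2)) for _ in range(n)],
             Process(target=write_results, args=(q2, n))]
    for p in procs: p.start()
    for p in procs: p.join()
```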
- An interactive tool for developing distributed telemedicine systems
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 521-527
Getting a qualified medical examination can be difficult for people in remote areas: the available medical staff may be inaccessible or may lack expert knowledge at the proper level. Telemedicine technologies can help in such situations. On the one hand, they allow highly qualified doctors to consult remotely, thereby increasing the quality of diagnosis and treatment planning. On the other hand, computer-aided analysis of examination results, anamnesis and information on similar cases assists medical staff in their routine activities and decision-making.
Creating a telemedicine system for a particular domain is a laborious process. It is not sufficient to select the proper medical experts and to fill the knowledge base of the analytical module; it is also necessary to organize the entire infrastructure of the system to meet requirements for reliability, fault tolerance, protection of personal data and so on. Tools with reusable infrastructure elements common to such systems can reduce the amount of work needed to develop telemedicine systems.
The article describes an interactive tool for creating distributed telemedicine systems. A list of requirements for such systems is presented, structural solutions for meeting the requirements are suggested, and a composition of such elements applicable to distributed systems is described. A cardiac telemedicine system is described as the foundation of the tool.
- Visualization of work of a distributed application based on the mqcloud library
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 529-532
In complex distributed computer systems, independent components communicate with each other under complex control, which makes such systems scale poorly within existing communication middleware. Two major scaling problems can be identified: overloading of unequal nodes due to proportional redistribution of workload, and difficulties in maintaining continuous communication between several nodes of the system. This paper presents a solution that enables visualization of the work of such a dynamical system.
- Communication-efficient solution of distributed variational inequalities using biased compression, data similarity and local updates
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1813-1827
Variational inequalities constitute a broad class of problems with applications in a number of fields, including game theory, economics, and machine learning. Today’s practical applications of VIs are becoming increasingly computationally demanding. It is therefore necessary to employ distributed computations to solve such problems in a reasonable time. In this context, workers have to exchange data with each other, which creates a communication bottleneck. There are three main techniques to reduce the cost and the number of communications: the similarity of local operators, the compression of messages and the use of local steps on devices. There is an algorithm that uses all of these techniques to solve the VI problem and outperforms all previous methods in terms of communication complexity. However, this algorithm is limited to unbiased compression. Meanwhile, biased (contractive) compression leads to better results in practice, but it requires additional modifications within an algorithm and more effort to prove the convergence. In this work, we develop a new algorithm that solves distributed VI problems using data similarity, contractive compression and local steps on devices, derive the theoretical convergence of such an algorithm, and perform some experiments to show the applicability of the method.
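As an example of the biased (contractive) compression the abstract refers to, the sketch below implements the standard Top-K operator, which keeps only the k largest-magnitude coordinates and satisfies the contraction bound ||C(x) - x||^2 <= (1 - k/d)||x||^2. The surrounding optimization loop is omitted, and k is an assumed parameter; the paper's own algorithm is not reproduced here.

```python
# Standard Top-K contractive compressor (a generic example, not the
# paper's full algorithm): zero all but the k largest-magnitude entries.
import numpy as np

def top_k(x: np.ndarray, k: int) -> np.ndarray:
    """Biased compression: keep the k coordinates with largest |x_i|."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

# A worker would send top_k(local_update, k) instead of the full vector,
# cutting per-round communication from d floats to k floats plus indices.
x = np.random.randn(10)
print(top_k(x, 3))
```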
- Decomposition of the modeling task of some objects of archeological research for processing in a distributed computer system
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 533-537
Although each task of recreating artifacts is truly unique, the modeling process for façades, foundations and building elements can be parametrized. This paper is focused on a complex of existing programming libraries and solutions that need to be united into a single computer system to solve such a task. An algorithm for generating 3D filling of objects under reconstruction is presented. The solution architecture necessary for adapting the system to a cloud environment is studied.
- Exact calculation of a posteriori probability distribution with distributed computing systems
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 539-542
We present a specific grid infrastructure and the development and deployment of a web application intended to solve particular geophysical problems that require heavy computational resources. Here we cover a technology overview and the connector framework internals. The connector framework links problem-specific routines with middleware in such a manner that the application developer does not have to be aware of any particular grid software. That is, a web application built with this framework acts as an interface between the user's web browser and the Grid's own middleware.
Our distributed computing system is built around the GridWay metascheduler. The metascheduler is connected to TORQUE resource managers of virtual compute nodes that run atop a compute cluster using virtualization technology. This approach offers several notable features that are unavailable to bare-metal compute clusters.
The first application we have integrated with our framework determines seismic anisotropic parameters by inversion of SKS and converted phases. We use a probabilistic approach to the inverse problem based on the a posteriori probability distribution function (APDF) formalism. To get the exact solution of the problem we have to compute the values of a multidimensional function. In our implementation we use brute-force APDF calculation on a rectangular grid across the parameter space.
The result of the computation is stored in a relational DBMS and then presented in a familiar human-readable form. The application provides several instruments for analysing the function's shape from the computational results: maximum value distribution, 2D cross-sections of the APDF, 2D marginals and a few other tools. During the tests we ran the application against both synthetic and observed data.
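The brute-force grid evaluation lends itself to a compact sketch. The forward model, grid and noise level below are illustrative assumptions rather than the paper's seismic formulation; in the paper's setting the independent grid points would be farmed out to the cluster through the metascheduler, since each point can be evaluated in isolation.

```python
# Sketch of brute-force evaluation of an (unnormalised) a posteriori
# probability density on a rectangular grid over parameter space.
# The forward model and data are illustrative stand-ins.
import numpy as np

def misfit(params, observed):
    predicted = params[0] * np.sin(params[1])   # stand-in forward model
    return np.sum((observed - predicted) ** 2)

def apdf_on_grid(axes, observed, sigma=1.0):
    grid = np.meshgrid(*axes, indexing="ij")
    points = np.stack([g.ravel() for g in grid], axis=-1)
    # Posterior ~ exp(-misfit / (2 sigma^2)); points are independent,
    # which is why the calculation parallelises trivially across jobs.
    values = np.array([np.exp(-misfit(p, observed) / (2 * sigma**2))
                       for p in points])
    return values.reshape(grid[0].shape)

axes = [np.linspace(0, 2, 50), np.linspace(0, np.pi, 50)]
apdf = apdf_on_grid(axes, observed=np.array([0.7]))
print(apdf.max(), np.unravel_index(apdf.argmax(), apdf.shape))
```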
- Distributed dCache-based storage system of UB RAS
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 559-563
An approach to building a territorially distributed storage system for the high performance computing environment of UB RAS is presented. The storage system is based on the dCache middleware from the European Middleware Initiative project. The first milestone of the distributed storage system implementation includes the data centers in two UB RAS regions: Yekaterinburg and Perm.
- Development of distributed computing applications and services with Everest cloud platform
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 593-599
The use of a service-oriented approach in scientific domains can increase research productivity by enabling sharing, publication and reuse of computing applications, as well as automation of scientific workflows. Everest is a cloud platform that enables researchers with minimal skills to publish and use scientific applications as services. In contrast to existing solutions, Everest executes applications on external resources attached by users, implements flexible binding of resources to applications and supports programmatic access to the platform's functionality. The paper presents the current state of the platform, recent developments and remaining challenges.
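Programmatic access of the kind the abstract mentions typically looks like the hedged sketch below: submit a job to a published application over HTTP and poll for completion. The base URL, endpoint paths, authorization scheme and payload fields here are hypothetical placeholders, not Everest's documented REST API.

```python
# Hypothetical REST-style client for submitting a job to a published
# application. URLs, endpoints and field names are placeholders, NOT
# Everest's documented API.
import time
import requests

BASE = "https://everest.example.org/api"       # placeholder base URL
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder auth scheme

def run_app(app_id, inputs):
    r = requests.post(f"{BASE}/apps/{app_id}/jobs",
                      json=inputs, headers=HEADERS)
    r.raise_for_status()
    job_id = r.json()["id"]
    while True:                                # poll until the job finishes
        state = requests.get(f"{BASE}/jobs/{job_id}",
                             headers=HEADERS).json()
        if state["state"] in ("DONE", "FAILED"):
            return state
        time.sleep(5)
```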