- Approaches to cloud infrastructures integration
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 583-590
One of the important directions in the development of cloud technologies today is the creation of methods for integrating different cloud infrastructures. In the academic field, the relevance of this direction stems from a frequent shortage of an organization's own computing resources and the resulting need to attract additional ones. This article is dedicated to the existing approaches to integrating cloud infrastructures with each other: federations and so-called ‘cloud bursting’. A ‘federation’ in terms of the OpenNebula cloud platform is built on a ‘one master zone and several slave zones’ schema, where a ‘zone’ is a separate cloud infrastructure within the federation. All zones in this kind of integration share a common user database, and the whole federation is managed through the master zone only. This approach is best suited to integrating the cloud infrastructures of geographically distributed branches of a single organization, but due to its high degree of centralization it is not appropriate for joining the cloud infrastructures of different organizations, and it is not applicable at all to clouds based on different software platforms. The model of federated integration implemented in the EGI Federated Cloud allows clouds based on different software platforms to be connected, but it requires the deployment of a considerable number of additional services specific to the EGI Federated Cloud, which makes this approach single-purpose and uncommon. The ‘cloud bursting’ model has none of the limitations listed above; however, in the OpenNebula platform, on which the cloud infrastructure of the Laboratory of Information Technologies of the Joint Institute for Nuclear Research (LIT JINR) is based, this model was implemented only for integration with a certain set of commercial cloud resource providers. Drawing on the authors' experience in joining the clouds of the organizations they represent, as well as with the EGI Federated Cloud, the LIT JINR cloud team developed a ‘cloud bursting’ driver for integrating OpenNebula-based clouds both with each other and with OpenStack-based ones. The article describes the driver's architecture, the technologies and protocols it relies on, and the experience of its usage.
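The abstract does not include the driver's code; as a rough illustration of the ‘cloud bursting’ idea, the sketch below shows the typical action set such a driver has to implement (deploy, poll, cancel), forwarding requests from a local cloud to a remote OpenStack-compatible Compute (Nova) REST API. The endpoint paths follow the public OpenStack Compute API, but the class layout, names and error handling are assumptions, not the actual LIT JINR driver.

```python
# Hypothetical sketch of a cloud-bursting driver core: it maps the
# action set a local cloud expects (deploy / poll / cancel) onto a
# remote OpenStack-compatible Compute (Nova) REST API.
# This is NOT the LIT JINR driver; names and structure are assumptions.
import requests

class BurstingDriver:
    def __init__(self, nova_url: str, token: str):
        self.nova_url = nova_url.rstrip("/")    # e.g. https://remote.cloud:8774/v2.1
        self.headers = {"X-Auth-Token": token}  # token obtained from Keystone beforehand

    def deploy(self, name: str, image_ref: str, flavor_ref: str) -> str:
        """Boot a VM on the remote cloud; return its remote ID."""
        body = {"server": {"name": name, "imageRef": image_ref, "flavorRef": flavor_ref}}
        r = requests.post(f"{self.nova_url}/servers", json=body, headers=self.headers)
        r.raise_for_status()
        return r.json()["server"]["id"]

    def poll(self, server_id: str) -> str:
        """Report the remote VM state (ACTIVE, BUILD, ERROR, ...)."""
        r = requests.get(f"{self.nova_url}/servers/{server_id}", headers=self.headers)
        r.raise_for_status()
        return r.json()["server"]["status"]

    def cancel(self, server_id: str) -> None:
        """Terminate the remote VM when the local cloud releases it."""
        r = requests.delete(f"{self.nova_url}/servers/{server_id}", headers=self.headers)
        r.raise_for_status()
```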
- Performance prediction for chosen types of loops over one-dimensional arrays with embedding-driven intermediate representations analysis
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 211-224
A method for mapping a set of intermediate representations (IR) of C and C++ programs into a vector embedding space is considered, in order to create an empirical framework for static performance prediction on top of the LLVM compiler infrastructure. The use of embeddings makes programs easier to compare, since it avoids the direct comparison of Control Flow Graphs (CFG) and Data Flow Graphs (DFG). The method is based on a series of transformations of the initial IR: instrumentation (injection of artificial instructions in an instrumentation compiler pass, depending on the load offset delta between the current instruction and the previous one), mapping of the instrumented IR into a multidimensional vector with IR2Vec, and dimension reduction with the t-SNE (t-distributed stochastic neighbor embedding) method. The D1 cache miss ratio, measured with the perf stat tool, is taken as the performance metric. A heuristic criterion for deciding which of two programs has the higher cache miss ratio is given; it is based on the programs' embeddings in 2D space. The instrumentation compiler pass developed in this work is described: how it generates and injects artificial instructions into the IR within the memory model used. The software pipeline that implements the performance estimation on top of the LLVM compiler infrastructure is presented. Computational experiments are performed on synthetic tests, which are sets of programs with the same CFG but with different sequences of offsets used when accessing a one-dimensional array of a given size. The correlation coefficient between the performance metric and the distance to the worst program's embedding is measured and shown to be negative regardless of the t-SNE initialization, which supports the heuristic criterion. The process of generating such synthetic tests is also considered. In addition, the spread of the performance metric within the program set of such a test is proposed as a quality measure to be improved by exploring further test generators.
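A minimal sketch of the evaluation step described above, assuming the per-program IR embeddings (e.g. produced by IR2Vec) are already available as rows of a matrix: reduce them to 2D with t-SNE, then correlate each program's distance to the worst program's embedding with its measured D1 cache miss ratio. The variable names and the placeholder data are assumptions, not the paper's dataset.

```python
# Sketch: correlate distance-to-worst-embedding with cache miss ratio.
# Assumes `embeddings` (n_programs x dim, e.g. from IR2Vec) and
# `miss_ratio` (n_programs, from `perf stat`) are already collected.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(50, 300))        # placeholder for real IR2Vec vectors
miss_ratio = rng.uniform(0.0, 0.3, size=50)    # placeholder for real perf measurements

points = TSNE(n_components=2, random_state=0).fit_transform(embeddings)

worst = points[np.argmax(miss_ratio)]          # embedding of the worst program
dist = np.linalg.norm(points - worst, axis=1)  # 2D distance to the worst program

# The paper reports this correlation to be negative regardless of the
# t-SNE initialization (closer to the worst => higher miss ratio);
# with the random placeholders above the value is of course meaningless.
r = np.corrcoef(dist, miss_ratio)[0, 1]
print(f"corr(distance to worst, miss ratio) = {r:.3f}")
```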
- Water consumption control model for regions with low water availability
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1395-1410
This paper considers the problem of water consumption in the regions of Russia with low water availability. We review the existing methods for controlling the quality and quantity of water resources at different scales, from individual households to the whole world. The paper itself considers regions with a low “water availability” parameter, defined as the amount of water per person per year. Special attention is paid to regions where this parameter is low because of the natural features of the region rather than because of a high population. In such regions considerable resources are spent on water-processing infrastructure to store water and to transport it from other regions, and the main water consumers are industry and agriculture.
We propose a dynamic two-level hierarchical model that relates the water consumption of a region to its gross regional product. On the top level there is the regional administration (the supervisor), and on the lower level there are the region's enterprises (the agents). The supervisor sets fees for water consumption. We study the model with Pontryagin's maximum principle and obtain the agents' optimal control in analytical form; for the supervisor's control we provide a numerical algorithm. The model has six free coefficients, which can be chosen so that the model represents a particular region. We use data from the Russian Federal State Statistics Service to identify the model, and a trust-region reflective algorithm for the numerical fitting. We provide calculations for several regions with low water availability. It is shown that the water consumption of a region can be reduced by more than 20% while the drop in gross regional product remains below 10%.
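A minimal sketch of the identification step under stated assumptions: the six free coefficients are fitted to observed regional time series with SciPy's trust-region reflective least-squares solver, the algorithm family named in the abstract. The residual model and the placeholder series below are stand-ins, not the paper's actual system of equations or data.

```python
# Sketch: fit six free model coefficients to regional statistics with a
# trust-region reflective (TRF) least-squares solver, as named in the text.
# The residual function and "observations" are placeholders, not the paper's model.
import numpy as np
from scipy.optimize import least_squares

years = np.arange(2010, 2021, dtype=float)
# Placeholder series; real data would come from Rosstat regional statistics.
water_use = 100.0 * np.exp(-0.03 * (years - years[0])) + 5.0
grp = 2.0 * water_use**0.8 + 30.0

def residuals(theta, t, w_obs, y_obs):
    a1, a2, a3, b1, b2, b3 = theta                 # six free coefficients
    w_model = a1 * np.exp(-a2 * (t - t[0])) + a3   # stand-in consumption dynamics
    y_model = b1 * np.maximum(w_model, 0.0)**b2 + b3
    return np.concatenate([w_model - w_obs, y_model - y_obs])

fit = least_squares(residuals, x0=np.ones(6), method="trf",
                    bounds=(0.0, np.inf),          # keep coefficients non-negative
                    args=(years, water_use, grp))
print("fitted coefficients:", fit.x)
```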
- Modeling the number of employed, unemployed and economically inactive population in the Russian Far East
Computer Research and Modeling, 2021, v. 13, no. 1, pp. 251-264
Studies of the crisis socio-demographic situation in the Russian Far East require not only traditional statistical methods but also a conceptual analysis of possible development scenarios based on the principles of synergetics. The article is devoted to the analysis and modeling of the numbers of employed, unemployed and economically inactive people using nonlinear autonomous differential equations. We study a basic mathematical model that takes into account the principle of pairwise interactions and is a special case of D. S. Chernavsky's model of the struggle between conditional information types. Point estimates of the parameters are found with a least squares method adapted for this model; the average approximation error does not exceed 5.17%. The calculated parameter values correspond to an unstable focus, with oscillations of growing amplitude in the population numbers in the asymptotic case, which indicates a gradual increase in the disparities between the employed, unemployed and economically inactive population and a collapse of their dynamics. We find that in the parameter space, not far from the inertial scenario, there are domains of blow-up and chaotic regimes that complicate effective management. The numerical study shows that changing only one model parameter (e.g. migration) without complex structural socio-economic changes can only delay the collapse of the dynamics in the long term, or leads to the emergence of unpredictable chaotic regimes. We find an additional set of model parameters corresponding to sustainable dynamics (a stable focus) that approximates well the time series of the considered population groups. The bifurcation parameters of the model are the outflow rate of the able-bodied population, the fertility (“rejuvenation of the population”) and the migration inflow rate of the unemployed. The transition to stable regimes turns out to be possible only under a simultaneous impact on several parameters, which requires a comprehensive set of measures to consolidate the population in the Russian Far East and to raise incomes as compensation for the sparseness of the infrastructure. Further economic and sociological research is required to develop specific state policy measures.
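As a rough illustration of the model class named above (nonlinear autonomous ODEs with pairwise interactions between three population groups), here is a minimal sketch; the right-hand side structure and the coefficient values are illustrative assumptions, not the calibrated model from the paper.

```python
# Sketch: a three-group system (employed E, unemployed U, inactive I)
# with pairwise interaction terms, in the spirit of Chernavsky-type
# models. Coefficients are illustrative, not the paper's estimates.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x, a, b, c):
    E, U, I = x
    dE = a[0] * E - a[1] * E * U - a[2] * E * I
    dU = b[0] * U + b[1] * E * U - b[2] * U * I
    dI = c[0] * I + c[1] * E * I + c[2] * U * I
    return [dE, dU, dI]

a, b, c = (0.02, 1e-4, 5e-5), (0.01, 5e-5, 1e-4), (0.005, 2e-5, 3e-5)
sol = solve_ivp(rhs, (0.0, 50.0), y0=[3.0, 0.4, 1.5],  # millions of people
                args=(a, b, c), dense_output=True)
print(sol.y[:, -1])   # group sizes at the end of the horizon
```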
- An interactive tool for developing distributed telemedicine systems
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 521-527
Getting a qualified medical examination can be difficult for people in remote areas, where medical staff may be unavailable or may lack expert knowledge at the proper level. Telemedicine technologies can help in such situations. On the one hand, they allow highly qualified doctors to consult remotely, thereby increasing the quality of diagnosis and treatment planning. On the other hand, computer-aided analysis of examination results, anamnesis and information on similar cases assists medical staff in their routine activities and decision-making.
Creating a telemedicine system for a particular domain is a laborious process. It is not sufficient to select the proper medical experts and to fill the knowledge base of the analytical module; the entire infrastructure of the system must also be organized to meet the requirements for reliability, fault tolerance, protection of personal data, and so on. Tools offering reusable infrastructure elements common to such systems can reduce the amount of work needed to develop them.
The article describes an interactive tool for creating distributed telemedicine systems. A list of requirements for such systems is presented, and structural solutions for meeting these requirements are suggested. A composition of such elements applicable to distributed systems is described. A cardiac telemedicine system is presented as the foundation of the tool.
- Exact calculation of a posteriori probability distribution with distributed computing systems
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 539-542
We present a specific grid infrastructure and the development and deployment of a web application whose purpose is to solve particular geophysical problems that require heavy computational resources. We cover a technology overview and the internals of the connector framework. The connector framework links problem-specific routines with middleware in such a way that the application developer does not have to be aware of any particular grid software; the web application built with this framework acts as an interface between the user's web browser and the Grid's own middleware.
Our distributed computing system is built around the GridWay metascheduler. The metascheduler is connected to the TORQUE resource managers of virtual compute nodes that run on top of a compute cluster using virtualization technology. This approach offers several notable features that are unavailable to bare-metal compute clusters.
The first application we integrated with our framework determines seismic anisotropy parameters by inversion of SKS and converted phases. We use a probabilistic approach to the inverse problem based on the a posteriori probability distribution function (APDF) formalism. To get the exact solution of the problem we have to compute the values of a multidimensional function; our implementation uses brute-force calculation of the APDF on a rectangular grid across the parameter space.
The results of the computation are stored in a relational DBMS and then presented in a familiar human-readable form. The application provides several instruments for analyzing the shape of the function from the computational results: the distribution of the maximum value, 2D cross-sections of the APDF, 2D marginals and a few other tools. During the tests we ran the application against both synthetic and observed data.
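A minimal sketch of the brute-force step under stated assumptions: the unnormalized posterior is evaluated on a rectangular grid over the parameter space, after which the maximum and the 2D marginals mentioned above fall out of simple array reductions. The toy Gaussian posterior is an assumption standing in for the real SKS inversion misfit.

```python
# Sketch: brute-force APDF evaluation on a rectangular parameter grid.
# The Gaussian "posterior" is a toy stand-in for the real SKS misfit.
import numpy as np

# Rectangular grid over a 3-parameter space (coarse for the example).
axes = [np.linspace(-1.0, 1.0, 41) for _ in range(3)]
P1, P2, P3 = np.meshgrid(*axes, indexing="ij")

def unnormalized_apdf(p1, p2, p3):
    misfit = (p1 - 0.2)**2 + (p2 + 0.1)**2 + (p3 - 0.4)**2
    return np.exp(-misfit / 0.05)

apdf = unnormalized_apdf(P1, P2, P3)

# Maximum a posteriori point: index of the largest grid value.
imax = np.unravel_index(np.argmax(apdf), apdf.shape)
print("MAP estimate:", [ax[i] for ax, i in zip(axes, imax)])

# A 2D marginal over (p1, p2): integrate the third parameter out.
marginal_12 = apdf.sum(axis=2)
marginal_12 /= marginal_12.sum()
```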
- Defining volunteer computing: a formal approach
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 565-571
Volunteer computing resembles a private desktop grid, whereas desktop grids are not fully equivalent to volunteer computing. There have been several attempts to distinguish and categorize them using informal and formal methods; however, most formal approaches model a particular middleware and do not focus on the general notion of volunteer or desktop grid computing. This work attempts to formalize their characteristics and relationship. To this end, formal modeling is applied that tries to capture the semantics of their functionalities, as opposed to comparisons based on properties, features, etc. We apply this modeling method to formalize the Berkeley Open Infrastructure for Network Computing (BOINC) [Anderson D. P., 2004] volunteer computing system.
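The abstract does not reproduce the formalism itself; as a loose illustration of modeling behavior (semantics) rather than static feature lists, here is a toy operational model of BOINC-style pull-based work distribution, where the defining trait is that volunteers request work and the server never pushes it. All names and the transition rules are illustrative assumptions, not the paper's formal model.

```python
# Toy sketch: a tiny operational model of BOINC-style pull-based work
# distribution, illustrating semantics-oriented modeling. This is an
# illustration, not the paper's formalism.
from dataclasses import dataclass, field
from enum import Enum, auto

class Task(Enum):
    UNSENT = auto()
    IN_PROGRESS = auto()
    DONE = auto()

@dataclass
class Server:
    tasks: dict[int, Task] = field(default_factory=dict)

    def request_work(self) -> int | None:
        """A volunteer *pulls* work: the server never pushes tasks."""
        for tid, state in self.tasks.items():
            if state is Task.UNSENT:
                self.tasks[tid] = Task.IN_PROGRESS
                return tid
        return None

    def report_result(self, tid: int) -> None:
        assert self.tasks[tid] is Task.IN_PROGRESS
        self.tasks[tid] = Task.DONE

server = Server(tasks={1: Task.UNSENT, 2: Task.UNSENT})
tid = server.request_work()      # volunteer asks for work
if tid is not None:
    server.report_result(tid)    # ... computes, then reports back
```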
- The Tier-1 resource center at the National Research Centre “Kurchatov Institute” for the ALICE, ATLAS and LHCb experiments at the Large Hadron Collider (LHC)
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 621-630
A review of the distributed computing infrastructure of the Tier-1 sites for the ALICE, ATLAS and LHCb experiments at the LHC is given. Special emphasis is placed on the main tasks and services of the Tier-1 site that operates at the Kurchatov Institute in Moscow.
- Basic directions of information technology in the National Academy of Sciences of Azerbaijan
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 657-660
Grid is a new type of computing infrastructure that is being intensively developed in today's world of information technologies; it provides global integration of information and computing resources. The essence of the Grid conception in Azerbaijan is to create a set of standardized services that provide reliable, compatible, inexpensive and secure access to geographically distributed high-tech information and computing resources: individual computers, cluster and supercomputing centers, information storage, networks, scientific instruments, etc.
- Using CERN cloud technologies for further ATLAS TDAQ software development and for its application to remote sensing data processing in space monitoring tasks
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 683-689
The CERN cloud technologies (the CernVM project) open new possibilities for software developers. The participation of the JINR ATLAS TDAQ working group in the software development for the distributed data acquisition and processing system (TDAQ) of the ATLAS experiment (CERN) involves working within a dynamically developing system and its infrastructure. The CERN cloud technologies, especially CernVM, provide the most effective access both to the TDAQ software and to the third-party software used in ATLAS: access to the Scientific Linux environment is provided by CernVM virtual machines, and access to the software repository by CernVM-FS. The functioning of the TDAQ middleware in the CernVM environment was studied in this work. The use of CernVM is illustrated with three examples: the development of the Event Dump and Webemon packages, and the adaptation of the data quality auto-checking system of the ATLAS TDAQ (Data Quality Monitoring Framework) to radar data assessment.
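For context on the repository access mentioned above: CernVM-FS exposes a software repository as a read-only filesystem mounted under /cvmfs/&lt;repository&gt;. The short sketch below, with an assumed repository name and path layout, simply checks that the mount is visible before loading software from it; it is an illustration, not part of the described work.

```python
# Sketch: verify that a CernVM-FS repository is visible before using it.
# CVMFS repositories appear as read-only trees under /cvmfs/<repo>.
# The repository name below is an assumption for illustration.
from pathlib import Path

repo = Path("/cvmfs/atlas.cern.ch")   # assumed repository mount point

if repo.is_dir():
    # Listing a directory triggers the lazy, HTTP-backed catalog fetch.
    releases = sorted(p.name for p in repo.iterdir())
    print(f"{repo} is mounted; top-level entries: {releases[:5]} ...")
else:
    print(f"{repo} is not mounted; check the cvmfs client configuration.")
```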




