- Cloud interpretation of the entropy model for calculating the trip matrix
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 89-103
As the population of cities grows, the need to plan the development of transport infrastructure becomes more acute, and transport modeling packages are created for this purpose. These packages usually contain a set of convex optimization problems whose iterative solution leads to the desired equilibrium distribution of flows along the paths. One direction for the development of transport modeling is the construction of more accurate generalized models that take into account different types of passengers, their travel purposes, and the specifics of the personal and public modes of transport that agents can use. Another important direction is improving the efficiency of the calculations performed: because of the large dimension of modern transport networks, the numerical search for an equilibrium distribution of flows along the paths is quite expensive, and the iterative nature of the entire solution process only makes this worse. One approach that reduces the amount of computation is the construction of consistent models, which allow the blocks of a four-stage model to be combined into a single optimization problem. This eliminates the iterative running of the blocks, moving from a separate optimization problem at each stage to one general problem. Earlier work has shown that such approaches yield equivalent solutions; however, the validity and interpretability of these methods deserve attention. The purpose of this article is to substantiate a single problem that combines both the calculation of the trip matrix and the modal choice for the generalized case in which the transport network contains different layers of demand, types of agents, and classes of vehicles. The article provides possible interpretations for the gauge parameters used in the problem, as well as for the dual factors associated with the balance constraints. The authors also show that the considered problem can be combined with the network-loading block into a single optimization problem.
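For orientation, a minimal sketch of the classical entropy model behind such trip-matrix calculations (the notation here is illustrative, not the article's): the matrix $d_{ij}$ solves

$$\min_{d \ge 0} \sum_{i,j} d_{ij}(\ln d_{ij} - 1) \quad \text{s.t.} \quad \sum_j d_{ij} = L_i, \qquad \sum_i d_{ij} = W_j,$$

and stationarity of the Lagrangian gives the multiplicative form $d_{ij} = \exp(-\lambda_i - \mu_j)$, where the dual factors $\lambda_i$, $\mu_j$ attached to the departure and arrival balance constraints are precisely the quantities whose interpretation the article discusses.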
- JINR TIER-1-level computing system for the CMS experiment at LHC: status and perspectives
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 455-462
The Compact Muon Solenoid (CMS) is a high-performance general-purpose detector at the Large Hadron Collider (LHC) at CERN. A distributed system for processing and further analysis of CMS experimental data has been developed, and its model requires the use of modern grid technologies. The CMS Computing Model makes use of a hierarchy of computing centers (Tiers). The Joint Institute for Nuclear Research (JINR) takes an active part in the CMS experiment. In order to provide a proper computing infrastructure for the CMS experiment at JINR and for the Russian institutes collaborating in CMS, a Tier-1 center for the CMS experiment is being constructed at JINR. The main tasks and services of the CMS Tier-1 at JINR are described, and the status and perspectives of the center are presented.
- Cloud technologies are already widespread in the IT industry and are starting to gain popularity in academia. There are several fundamental cloud service models: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). The article describes the cloud infrastructure deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research (LIT JINR). It explains the goals of creating the cloud infrastructure, the specifics of the implementation, its utilization, current work, and plans for development.
Keywords: cloud technologies, virtualization.
- Empirical testing of institutional matrices theory by data mining
Computer Research and Modeling, 2015, v. 7, no. 4, pp. 923-939
The paper aims to identify the set of environmental and infrastructural parameters with the most significant impact on the institutional matrices that dominate in different countries. The environmental parameters include raw statistical indices taken directly from open-access databases, as well as complex integral indicators derived by the method of principal components. The efficiency of these parameters in recognizing the dominant institutional matrix type (X or Y) was evaluated by a number of machine-learning methods. It was revealed that the greatest informational content is associated with parameters characterizing the risk of natural disasters, the level of urbanization and the development of transport infrastructure, and the monthly averages and seasonal variations of temperature and precipitation.
Keywords: institutional matrices theory, machine learning.
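A minimal sketch of such a recognition pipeline (the data, component count, and classifier below are placeholders, not the paper's setup):

# Hypothetical sketch: PCA-compressed country indicators feed a classifier
# that predicts the dominant institutional matrix type (X = 1, Y = 0).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))    # 120 countries x 20 raw indicators (placeholder data)
y = rng.integers(0, 2, size=120)  # dominant matrix type labels (placeholder)

model = make_pipeline(StandardScaler(), PCA(n_components=5), LogisticRegression())
print(cross_val_score(model, X, y, cv=5).mean())  # accuracy estimate for this feature set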
- Improving the quality of route generation in SUMO based on data from detectors using reinforcement learning
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 137-146
This work presents a new approach for constructing high-precision routes from transport detector data inside the SUMO traffic modeling package. Existing tools such as flowrouter and routeSampler have a number of disadvantages, such as the lack of interaction with the network while routes are being built. Our rlRouter uses multi-agent reinforcement learning (MARL), where the agents are the incoming lanes and the environment is the road network. By performing actions that launch vehicles, agents receive a reward for matching the data from transport detectors. Parameter Sharing DQN with an LSTM backbone for the Q-function was used as the multi-agent reinforcement learning algorithm.
Since rlRouter is trained inside the SUMO simulation, it can restore routes better by taking into account the interaction of vehicles with each other and with the network infrastructure. We modeled diverse traffic situations at three different junctions in order to compare the performance of SUMO's routers with rlRouter, using Mean Absolute Error (MAE) as the measure of deviation from both cumulative detector data and route data. rlRouter achieved the highest compliance with the detector data. We also found that maximizing the reward for matching the detectors brings the resulting routes closer to the real ones. Although the routes recovered by rlRouter are superior to those obtained with SUMO's tools, they do not fully correspond to the real ones because of the natural limitations of induction-loop detectors. To obtain more plausible routes, junctions would need to be equipped with other types of transport counters, for example camera detectors.
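The reward idea can be sketched as follows (names and shapes are hypothetical; the abstract does not publish rlRouter's internals):

# Hypothetical sketch of a detector-matching reward for a lane agent.
# simulated[d] and measured[d] are cumulative vehicle counts per detector d.
def detector_matching_reward(simulated: dict, measured: dict) -> float:
    # Negative mean absolute error: the closer the simulated counts track
    # the real detector data, the higher the reward.
    errors = [abs(simulated[d] - measured[d]) for d in measured]
    return -sum(errors) / len(errors)

Maximizing this reward is what, per the experiments above, simultaneously pulls the generated routes toward the real ones.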
- Modeling the impact of epidemic spread and lockdown on economy
Computer Research and Modeling, 2025, v. 17, no. 2, pp. 339-363
Epidemics severely destabilize economies by reducing productivity, weakening consumer spending, and overwhelming public infrastructure, often culminating in economic recessions. The COVID-19 pandemic underscored the critical role of nonpharmaceutical interventions, such as lockdowns, in containing infectious disease transmission. This study investigates how the progression of epidemics and the implementation of lockdown policies shape the economic well-being of populations. Using compartmental ordinary differential equation (ODE) models, the research analyzes the interplay between epidemic dynamics and economic outcomes, focusing on how varying lockdown intensities influence both disease spread and population wealth. The findings reveal that epidemics inflict significant economic damage, but timely and stringent lockdowns can mitigate healthcare system overload by sharply reducing infection peaks and delaying the epidemic's trajectory. However, carefully timed lockdown relaxation is equally vital to prevent resurgent outbreaks. The study identifies key epidemiological thresholds, such as transmission rates, recovery rates, and the basic reproduction number $\mathfrak{R}_0$, that determine the effectiveness of lockdowns. Analytically, it pinpoints the optimal proportion of isolated individuals required to minimize total infections in scenarios where permanent immunity is assumed. Economically, the analysis quantifies lockdown impacts by tracking population wealth, demonstrating that economic outcomes depend heavily on the fraction of isolated individuals who remain economically productive. Higher proportions of productive individuals during lockdowns correlate with better wealth retention, even under fixed epidemic conditions. These insights equip policymakers with actionable frameworks for designing balanced lockdown strategies that curb disease spread while safeguarding economic stability during future health crises.
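A minimal compartmental sketch of this setup (all parameter values and the wealth term are illustrative assumptions, not the paper's calibration):

# Toy SIR model with a lockdown of intensity u and a stylized wealth equation;
# every number here is an assumption chosen for demonstration only.
from scipy.integrate import solve_ivp

beta, gamma = 0.4, 0.1   # transmission / recovery rates, so R0 = beta/gamma = 4
u = 0.5                  # lockdown intensity: fraction of contacts removed
p = 0.6                  # share of isolated individuals who stay productive

def rhs(t, y):
    S, I, R, W = y                      # susceptible, infected, recovered, wealth
    dS = -beta * (1 - u) * S * I        # lockdown scales down the contact rate
    dI = beta * (1 - u) * S * I - gamma * I
    dR = gamma * I
    dW = (S + R) * ((1 - u) + p * u)    # only the active fraction produces output
    return [dS, dI, dR, dW]

sol = solve_ivp(rhs, (0, 300), [0.99, 0.01, 0.0, 0.0], max_step=1.0)
print(f"peak infected fraction: {sol.y[1].max():.3f}, final wealth: {sol.y[3][-1]:.1f}")

Raising p at fixed u leaves the epidemic curve untouched but improves the wealth trajectory, which is the trade-off the paper quantifies.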
- Running applications on a hybrid cluster
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 475-483
A hybrid cluster implies the use of computational devices with radically different architectures, usually a conventional CPU architecture (e.g. x86_64) and a GPU architecture (e.g. NVIDIA CUDA). Creating and exploiting such a cluster requires some experience: in order to harness all the computational power of such a system and obtain substantial speedup for computational tasks, many factors should be taken into account. These factors include hardware characteristics (e.g. network infrastructure, the type of data storage, GPU architecture) as well as the software stack (e.g. the MPI implementation, GPGPU libraries). So, in order to run scientific applications, GPU capabilities, software features, task size, and other factors should be considered.
This report discusses the opportunities and problems of hybrid computations. Some statistics from runs of test programs and applications are presented. The main focus is on open-source applications (e.g. OpenFOAM) that support GPGPU (with some parts rewritten to use GPGPU directly or by replacing libraries).
There are several approaches to organizing heterogeneous computations for different GPU architectures, of which the CUDA library and the OpenCL framework are compared. The CUDA library has become fairly standard for hybrid systems with NVIDIA cards, but OpenCL offers portability, which can be a determining factor when choosing a framework for development. We also put emphasis on multi-GPU systems, which are often used to build hybrid clusters. Calculations were performed on a hybrid cluster of the SPbU computing center.
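As a small illustration of the portability argument (a generic vector-add sketch, not code from the cluster in question), the same OpenCL kernel source runs unchanged on a CPU or a GPU device:

# Generic PyOpenCL vector addition: the kernel source is portable across
# OpenCL devices, which is the portability advantage mentioned above.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()            # picks any available OpenCL device
queue = cl.CommandQueue(ctx)
a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)
mf = cl.mem_flags
a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_g = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)
kernel = """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
"""
prg = cl.Program(ctx, kernel).build()
prg.add(queue, a.shape, None, a_g, b_g, out_g)
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_g)
assert np.allclose(out, a + b)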
- The model of interference of long waves of economic development
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 649-663
The article substantiates the need to develop and analyze mathematical models that take into account the mutual influence of long (Kondratiev) waves of economic development. An analysis of the available publications shows that, at the model level, the direct and inverse relationships between intersecting long waves are still insufficiently studied. As practice shows, production on the current long wave can receive an additional impetus for growth from the technologies of the next long wave: the technologies of the next industrial revolution often serve as improving innovations for the industries born of the previous one. As a result, the new long wave increases the amplitude of the oscillations of the trajectory of the previous long wave. Such results of the interaction of long waves in the economy are similar to the effects of interference of physical waves, and the mutual influence of the recessions and booms of the economies of different countries gives even more grounds for the comparison. The article presents a model for the development of the technological base of production that takes into account the possibilities of combining old and new technologies. The model consists of several sub-models. Using a different mathematical description for the individual stages of updating the technological base makes it possible to account for the significant differences between the successive phases of the life cycle of general-purpose technologies, considered in the modern literature as the technological basis of industrial revolutions. One of these phases is the period of formation of the infrastructure necessary for the intensive diffusion of a new general-purpose technology and for the rapid development of the industries using it. The model is used for illustrative calculations with values of the exogenous parameters corresponding to the logic of changing long waves. For all the conditionality of these illustrative calculations, the configuration of the curve representing the change in the return on capital over the simulated period is close to the configuration of the real trajectory of the return on private fixed assets in the US economy in 1982-2019. The article concludes by indicating factors that remained outside the scope of the presented model but that should be taken into account when describing the interference of long waves of economic development.
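The interference analogy itself is easy to visualize (a stylized illustration, not the article's model): where an emerging long wave overlaps a mature one in phase, the amplitude of the combined trajectory exceeds that of either wave alone.

# Stylized "interference" of two long waves: two Gaussian-enveloped
# oscillations of the same period, shifted in time, reinforce each other
# where they overlap. Purely illustrative, not the article's sub-models.
import numpy as np

t = np.linspace(0, 100, 1001)   # years, arbitrary scale
old_wave = np.exp(-((t - 40) / 25) ** 2) * np.sin(2 * np.pi * t / 50)
new_wave = np.exp(-((t - 80) / 25) ** 2) * np.sin(2 * np.pi * t / 50)
combined = old_wave + new_wave
print(f"single-wave peak: {old_wave.max():.2f}, combined peak: {combined.max():.2f}")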
- Utilizing multi-source real data for traffic flow optimization in CTraf
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 147-159
The problem of optimal control of traffic flow in an urban road network is considered. The control is carried out by varying the duration of the working phases of traffic lights at controlled intersections. A description of the control system developed is given. The system enables three types of control: open-loop, feedback, and manual. In feedback control, road infrastructure detectors, video cameras, inductive-loop and radar detectors are used to determine the quantitative characteristics of the current traffic flow state. These characteristics are fed into a mathematical model of the traffic flow, implemented in the computing environment of the automatic control system, in order to determine the moments for switching the working phases of the traffic lights. The model is a system of finite-difference recurrent equations and describes the change in traffic flow on each road section at each time step, based on retrieved data on the traffic flow characteristics in the network, the capacity of maneuvers, and the distribution of flow over alternative maneuvers at intersections. The model has scaling and aggregation properties, and its structure depends on the structure of the graph of the controlled road network: the number of nodes in the graph equals the number of road sections in the network under consideration. Simulating traffic flow changes in real time makes it possible to determine the optimal duration of traffic light operating phases and to provide feedback control of the traffic flow based on its current state. The system for automatic collection and processing of input data for the model is presented. To model the states of traffic flow in the network and to solve the optimal control problem, the CTraf software package has been developed, a brief description of which is given in the paper. An example of solving the optimal control problem using real data from the Moscow road network is given.
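The model class described above can be sketched as a simple recurrence (generic names; CTraf's actual equations are not given in the abstract):

# Generic sketch of a finite-difference recurrent traffic model, not CTraf's
# actual equations: n[i] is the vehicle count on road section i, P[j][i] is
# the share of flow leaving j for i, cap[i] is the outflow capacity per step.
def step(n, P, cap, dt=1.0):
    k = len(n)
    outflow = [min(n[i], cap[i]) for i in range(k)]
    inflow = [sum(outflow[j] * P[j][i] for j in range(k)) for i in range(k)]
    return [n[i] + dt * (inflow[i] - outflow[i]) for i in range(k)]

# Three sections in a ring: all flow from section i proceeds to (i + 1) % 3.
n = [30.0, 5.0, 0.0]
P = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
cap = [10.0, 10.0, 10.0]
for _ in range(5):
    n = step(n, P, cap)
print(n)  # the queue on section 0 drains until the flows balance

In the real system, switching a traffic light phase amounts to changing the effective maneuver capacities cap[i] over time, which is the control variable being optimized.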
- The New Use of Network Element in ATLAS Workload Management System
Computer Research and Modeling, 2015, v. 7, no. 6, pp. 1343-1349
A crucial component of distributed computing systems is the network infrastructure. While networking forms the backbone of such systems, it is often the invisible partner to storage and computing resources. We propose to integrate Network Elements directly into distributed systems through the workload management layer. There are many reasons for this approach: as the complexity of and demand for distributed systems grow, it is important to use the existing infrastructure efficiently. For example, one could use network performance measurements in the decision-making mechanisms of workload management systems. New technologies such as Software Defined Networks (SDN) allow network configuration to be defined programmatically. We describe how these methods are being used within the PanDA workload management system of the ATLAS collaboration.
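The decision-making idea can be sketched as follows (a hypothetical scoring function; PanDA's real brokerage is far more involved):

# Hypothetical sketch of network-aware site selection in a workload manager:
# rank candidate sites by queue depth penalized by measured network throughput.
def pick_site(sites):
    # sites: {name: {"queued_jobs": int, "throughput_mbps": float}}
    def cost(name):
        m = sites[name]
        return m["queued_jobs"] / max(m["throughput_mbps"], 1e-6)
    return min(sites, key=cost)

sites = {
    "SITE_A": {"queued_jobs": 120, "throughput_mbps": 800.0},
    "SITE_B": {"queued_jobs": 40, "throughput_mbps": 100.0},
}
print(pick_site(sites))  # SITE_A: a deeper queue, but far better connectivity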