-
Cloud interpretation of the entropy model for calculating the trip matrix
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 89-103
As the population of cities grows, the need to plan the development of transport infrastructure becomes more acute. For this purpose, transport modeling packages are created. These packages usually contain a set of convex optimization problems, the iterative solution of which leads to the desired equilibrium distribution of flows along the paths. One direction for the development of transport modeling is the construction of more accurate generalized models that take into account different types of passengers, their travel purposes, as well as the specifics of personal and public modes of transport that agents can use. Another important direction is to improve the efficiency of the calculations performed: due to the large dimension of modern transport networks, the search for a numerical solution to the problem of equilibrium distribution of flows along the paths is quite expensive, and the iterative nature of the entire solution process only makes this worse. One approach that reduces the amount of computation is the construction of consistent models, which allow the blocks of a 4-stage model to be combined into a single optimization problem. This makes it possible to eliminate the iterative running of the blocks, moving from solving a separate optimization problem at each stage to some general problem. Earlier work has proven that such approaches provide equivalent solutions. However, it is worth considering the validity and interpretability of these methods. The purpose of this article is to substantiate a single problem that combines both the calculation of the trip matrix and the modal choice for the generalized case when there are different layers of demand, types of agents and classes of vehicles in the transport network. The article provides possible interpretations for the gauge parameters used in the problem, as well as for the dual factors associated with the balance constraints. The authors also show the possibility of combining the considered problem with a block for determining network load into a single optimization problem.
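For reference, the classical single-layer entropy model of trip distribution can be written as follows (standard notation; the generalized multi-layer formulation of the article, which also includes modal choice, is not reproduced here):

$$
\min_{d_{ij}\ge 0}\ \sum_{i,j} d_{ij}\ln d_{ij} \;+\; \beta\sum_{i,j} c_{ij}\,d_{ij}
\quad\text{s.t.}\quad
\sum_{j} d_{ij}=L_i,\qquad \sum_{i} d_{ij}=W_j,
$$

with solution $d_{ij}=\exp(\lambda_i+\mu_j-\beta c_{ij}-1)$, where $d_{ij}$ is the number of trips from origin $i$ to destination $j$, $c_{ij}$ is the generalized travel cost, $L_i$ and $W_j$ are the numbers of departures and arrivals, $\beta$ is a calibration parameter, and $\lambda_i$, $\mu_j$ are the Lagrange multipliers (dual variables) of the balance constraints, i.e. the kind of quantities whose interpretation the article discusses.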
-
Migration processes modelling: methods and tools (overview)
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1205-1232
Migration has a significant impact on shaping the demographic structure of the population of territories and the state of regional and local labour markets. As a rule, a rapid change in the working-age population of any territory due to migration processes results in an imbalance in supply and demand on labour markets and a change in the demographic structure of the population. Migration is also to a large extent a reflection of socio-economic processes taking place in society. Hence, the issues related to the study of migration factors, the direction, intensity and structure of migration flows, and the prediction of their magnitude are becoming topical these days.
Mathematical tools are often used to analyze and predict migration processes and to assess their consequences, allowing migration processes to be modelled fairly accurately for different territories on the basis of the available statistical data. In recent years, quite a number of scientific papers on modelling internal and external migration flows using mathematical methods have appeared both in Russia and abroad. Consequently, there is a need to systematize the methods and tools currently most commonly applied in migration modelling in order to form a coherent picture of the main trends and research directions in this field.
The presented review considers the main approaches to migration modelling and the main components of migration modelling methodology, i. e. stages, methods, models and model classification. A comparative analysis of these was also conducted and general recommendations on the choice of mathematical tools for modelling were developed. The review contains two sections: migration modelling methods and migration models. The first section describes the main methods used in the model development process — econometric, cellular automata, system-dynamic, probabilistic, balance, optimization and cluster analysis. Based on the analysis of modern domestic and foreign publications on migration, the most common classes of models — regression, agent-based, simulation, optimization, probabilistic, balance, dynamic and combined — were identified and described. The features, advantages and disadvantages of the different types of migration process models were considered.
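As a minimal illustration of the kind of aggregate models surveyed in such reviews (not a formula taken from the review itself), the classical gravity model of migration expresses the flow between two territories through their populations and the distance between them:

$$
M_{ij} = k\,\frac{P_i^{\alpha} P_j^{\beta}}{d_{ij}^{\gamma}},
$$

where $M_{ij}$ is the migration flow from territory $i$ to territory $j$, $P_i$ and $P_j$ are their populations, $d_{ij}$ is the distance between them, and $k$, $\alpha$, $\beta$, $\gamma$ are parameters estimated from statistical data, for example by regression on logarithms.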
-
Proof of the connection between the Backman model with degenerate cost functions and the model of stable dynamics
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 335-342
Since the 1950s, the field of city transport modelling has progressed rapidly, and the first equilibrium distribution models of traffic flow appeared. The most popular model (which is still widely used) is the Beckmann model, based on the two Wardrop principles. The core of the model can be briefly described as the search for a Nash equilibrium in a population demand game, in which the losses of agents (drivers) are calculated based on the chosen path and the costs of this path, with the correspondences (origin-destination demands) being fixed. The costs of a path are calculated as the sum of the costs of the path segments (graph edges) included in the path. The cost of an edge (edge travel time) is determined by the amount of traffic on this edge (more traffic means larger travel time). The flow on a graph edge is determined by the sum of flows over all paths passing through the given edge. Thus, the cost of traveling along a path is determined not only by the choice of the path, but also by the paths other drivers have chosen, which makes this a standard game-theoretic problem. The way the cost functions are constructed allows the search for equilibrium to be narrowed to solving an optimization problem (the game is potential in this case). If the cost functions are monotone and non-decreasing, the optimization problem is convex. Different assumptions about the cost functions form different models; the most popular one is based on the BPR cost function, and such functions are widely used in calculations for real cities. However, at the beginning of the XXI century, Yu. E. Nesterov and A. de Palma showed that Beckmann-type models have serious weak points, which can be fixed using what the authors called the stable dynamics model. The search for equilibrium here can also be reduced to an optimization problem, moreover, to a linear programming problem. In 2013, A. V. Gasnikov discovered that the stable dynamics model can be obtained by a passage to the limit in the Beckmann model. However, this was done only for several practically important, but still special, cases; in general, the question of whether this passage to the limit is possible remained open. In this paper, we justify the possibility of the above-mentioned passage to the limit in the general case, when the cost function for traveling along an edge, as a function of the flow along this edge, degenerates into a function equal to fixed costs until the capacity is reached and equal to plus infinity when the capacity is exceeded.
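In standard notation (a schematic summary rather than the exact formulation of the paper), the Beckmann equilibrium is the solution of

$$
\min_{f\in F}\;\sum_{e\in E}\int_0^{f_e}\tau_e(z)\,dz,
$$

where $F$ is the set of edge flows $f=(f_e)_{e\in E}$ induced by path flows consistent with the fixed correspondences, and $\tau_e(\cdot)$ is the non-decreasing travel-time function of edge $e$. The degenerate case considered in the paper corresponds to

$$
\tau_e(f_e)=\bar t_e \ \ \text{for } f_e<\bar f_e, \qquad \tau_e(f_e)=+\infty \ \ \text{for } f_e>\bar f_e,
$$

under which the problem turns into the linear program of the stable dynamics model: $\min_{f\in F}\sum_e \bar t_e f_e$ subject to $f_e\le\bar f_e$.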
-
Signal and noise calculation at Rician data analysis by means of combining maximum likelihood technique and method of moments
Computer Research and Modeling, 2018, v. 10, no. 4, pp. 511-523
The paper develops a new mathematical method of joint signal and noise calculation for the Rice statistical distribution, based on combining the maximum likelihood method and the method of moments. The calculation of the sought-for values of signal and noise is implemented by processing sampled measurements of the analyzed Rician signal's amplitude. An explicit system of equations has been obtained for the required signal and noise parameters, and the results of its numerical solution are provided, confirming the efficiency of the proposed technique. It has been shown that solving the two-parameter task by means of the proposed technique does not increase the amount of required computational resources compared with solving the task in the one-parameter approximation. An analytical solution of the task has been obtained for the particular case of a small signal-to-noise ratio. The paper investigates the dependence of the estimation accuracy and dispersion of the sought-for parameters on the number of measurements in the experimental sample. According to the results of numerical experiments, the dispersion values of the estimated signal and noise parameters calculated by means of the proposed technique change in inverse proportion to the number of measurements in a sample. The accuracy of the estimation of the Rician parameters by means of the proposed technique has been compared with that of an earlier developed version of the method of moments. The problem considered in the paper is meaningful for the purposes of Rician data processing, in particular in magnetic resonance imaging systems, in ultrasonic imaging devices, in the analysis of optical signals in range-measuring systems, in radar signal analysis, as well as in many other scientific and applied tasks that are adequately described by the Rice statistical model.
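For background (standard facts about the Rice distribution, not the specific equations derived in the paper), the probability density of the measured amplitude $x$ and one moment relation often used in moment-based estimation are

$$
P(x\mid\nu,\sigma)=\frac{x}{\sigma^2}\exp\!\Big(-\frac{x^2+\nu^2}{2\sigma^2}\Big)\,I_0\!\Big(\frac{x\nu}{\sigma^2}\Big),\qquad x\ge 0,
\qquad
\langle x^2\rangle=\nu^2+2\sigma^2,
$$

where $\nu$ is the sought-for signal value, $\sigma^2$ is the noise variance, $I_0$ is the modified Bessel function of the first kind of order zero, and the signal-to-noise ratio is $\nu/\sigma$.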
-
Modeling of the supply–demand imbalance in engineering labor market
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1249-1273
Nowadays, supply-demand imbalances in the labor markets of professionals cause human capital losses and hamper scientific and innovation development. In Russia, supply-demand imbalances in the engineering labor market are associated with deindustrialization processes and manufacturing decline, which have resulted in a negative public perception of the engineering profession and high rates of graduates not working within their specialty or changing their occupation.
To analyze the supply-demand imbalances in the engineering labor market, we developed a macroeconomic model. The model consists of 14 blocks, including blocks for the demand and supply for engineers and technicians, along with blocks for macroeconomic indicators such as industry and service sector output and capital investment. Using this model, we forecast the prospective supply-demand imbalances in the engineering labor market in the short term and examined the parameters required to reach supply-demand balance in the medium term.
The results obtained show that a more balanced supply and demand for engineering labor is possible if there is a simultaneous increase in the share of investments in fixed assets of manufacturing and in relative wages in industry; in addition, reaching balance is facilitated by a decrease in the share of graduates not working in their specialty. It is worth noting that a decrease in the share of graduates not working in their specialty may be driven either by the growth of relative wages in industry and the number of vacancies, or by the implementation of measures aimed at improving the working conditions of the engineering workforce and increasing the attractiveness of the profession. To summarize, in the simplest scenario, which does not consider additional measures to improve working conditions and increase the attractiveness of the profession, achieving supply-demand balance implies slightly lower growth rates of investment in industry than required in scenarios that involve increasing the share of engineers and technicians working in their specialty after graduation. The latter case, where a gradual decrease in the proportion of those who do not work in an engineering specialty is expected, probably requires higher investment costs for attracting specialists and creating new jobs, as well as additional measures to strengthen the attractiveness of the engineering profession.
-
An Algorithm for Simulating the Banking Network System and Its Application for Analyzing Macroprudential Policy
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1275-1289
Modeling banking systems using a network approach has received growing attention in recent years. One of the notable models is that developed by Iori et al., who proposed a banking system model for analyzing systemic risks in interbank networks. The model is built on simple dynamics of several bank balance sheet variables such as deposits, equity, loans, liquid assets, and interbank lending (or borrowing) in the form of difference equations. Each bank faces random shocks in deposits and loans. The balance sheet is updated at the beginning or end of each period. In the model, banks are grouped into potential lenders and potential borrowers. The potential borrowers are those that lack liquidity, and the potential lenders are those that have excess liquidity after paying dividends and channeling new investment. The borrowers and the lenders are connected through the interbank market. Each borrower is linked to some percentage of randomly chosen potential lenders, from which it borrows funds to maintain its liquidity safety net. If the demand for borrowed funds can be met by the supply of excess liquidity, the borrower bank survives; if not, it is deemed to be in default and is removed from the banking system. However, in their paper, most of the interbank borrowing-lending mechanism is described qualitatively rather than by detailed mathematical or computational analysis. Therefore, in this paper, we enhance the mathematical description of borrowing and lending in the interbank market and present an algorithm for simulating the model. We also perform simulations to analyze the effects of the model's parameters on banking stability, using the number of surviving banks as the measure. We apply this technique to analyze the effect of a macroprudential policy, a loan-to-deposit-ratio-based reserve requirement, on banking stability.
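A minimal sketch of the borrower-lender matching and survival step described above is given below. It is an illustration under simplifying assumptions; the names (Bank, liquidity_gap, match_interbank) and the linkage_fraction parameter are hypothetical and are not taken from the paper or from Iori et al.

```python
import random

class Bank:
    def __init__(self, name, liquid_assets, required_liquidity):
        self.name = name
        self.liquid_assets = liquid_assets              # liquid assets after shocks
        self.required_liquidity = required_liquidity    # safety-net level

    @property
    def liquidity_gap(self):
        # positive gap -> potential borrower, non-positive gap -> potential lender
        return self.required_liquidity - self.liquid_assets


def match_interbank(banks, linkage_fraction=0.3, rng=random.Random(0)):
    """Match borrowers with a random subset of lenders; return surviving banks."""
    lenders = [b for b in banks if b.liquidity_gap < 0]
    borrowers = [b for b in banks if b.liquidity_gap > 0]
    survivors = [b for b in banks if b.liquidity_gap <= 0]

    for borrower in borrowers:
        need = borrower.liquidity_gap
        # each borrower sees only a random fraction of the potential lenders
        k = max(1, int(linkage_fraction * len(lenders))) if lenders else 0
        partners = rng.sample(lenders, k) if k else []
        for lender in partners:
            available = -lender.liquidity_gap           # lender's excess liquidity
            loan = min(need, available)
            lender.liquid_assets -= loan
            borrower.liquid_assets += loan
            need -= loan
            if need <= 0:
                break
        if need <= 0:
            survivors.append(borrower)                  # demand met: bank survives
        # otherwise the bank defaults and is dropped from the system
    return survivors
```

Here a borrower survives only if the lenders it is randomly linked to can jointly cover its liquidity gap, mirroring the survive-or-default rule described in the abstract.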
-
The New Use of Network Element in ATLAS Workload Management System
Computer Research and Modeling, 2015, v. 7, no. 6, pp. 1343-1349
A crucial component of distributed computing systems is the network infrastructure. While networking forms the backbone of such systems, it is often the invisible partner to storage and computing resources. We propose to integrate Network Elements directly into distributed systems through the workload management layer. There are many reasons for this approach. As the complexity of and demand for distributed systems grow, it is important to use existing infrastructure efficiently. For example, one could use network performance measurements in the decision-making mechanisms of workload management systems. New technologies, such as Software Defined Networking (SDN), allow network configuration to be defined programmatically. We describe how these methods are being used within the PanDA workload management system of the ATLAS collaboration.
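As a toy illustration of using network measurements in workload decisions (a hypothetical sketch, not PanDA's actual brokerage interface; all names and numbers below are made up), a broker might rank candidate sites with free slots by the measured throughput of the link from the site holding the input data:

```python
from typing import Dict, Tuple

def pick_site(source_site: str,
              free_slots: Dict[str, int],
              throughput_mbps: Dict[Tuple[str, str], float]) -> str:
    """Choose a site with free slots and the best measured link to the data."""
    candidates = [s for s, slots in free_slots.items() if slots > 0]
    if not candidates:
        raise RuntimeError("no site with free slots")
    # prefer the site with the highest measured throughput from the data source
    return max(candidates,
               key=lambda s: throughput_mbps.get((source_site, s), 0.0))

# Example usage with made-up measurements:
slots = {"SITE_A": 120, "SITE_B": 0, "SITE_C": 40}
links = {("CERN", "SITE_A"): 800.0, ("CERN", "SITE_C"): 2500.0}
print(pick_site("CERN", slots, links))   # -> SITE_C
```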
-
The application of genetic algorithms for organizational systems’ management in case of emergency
Computer Research and Modeling, 2019, v. 11, no. 3, pp. 533-556
Optimal management of a fuel supply system boils down to choosing an energy development strategy which provides consumers with the most efficient and reliable fuel and energy supply. As part of the program on switching the distributed heat supply management system of the Udmurt Republic to renewable energy sources, an "Information-analytical system of regional alternative fuel supply management" was developed. The paper presents a mathematical model of optimal management of the fuel supply logistic system, which consists of three interconnected levels: raw material accumulation points, fuel preparation points and fuel consumption points (heat sources). To improve the performance of the regional fuel supply system, the information-analytical system has to be modified and its set of functions extended with methods for rapid response when an emergency occurs. Emergencies occurring at any one of these levels require the management of the whole system to be reconfigured. The paper presents models and algorithms of optimal management in case of emergencies involving the breakdown of such production links of the logistic system as raw material accumulation points and fuel preparation points. In the mathematical models, the target criterion is the minimization of the costs associated with the functioning of the logistic system in case of emergency. The implementation of the developed algorithms is based on genetic optimization algorithms, which made it possible to obtain a more accurate solution in less time. The developed models and algorithms are integrated into the information-analytical system, which makes it possible to provide effective management of the alternative fuel supply of the Udmurt Republic in case of emergency.
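A minimal genetic-algorithm sketch for the kind of reassignment problem described above is shown below; it assumes a made-up cost matrix cost[c][p] for serving consumer c (a heat source) from fuel preparation point p and is not the algorithm implemented in the information-analytical system.

```python
import random

def genetic_assignment(cost, n_consumers, n_points,
                       pop_size=40, generations=200,
                       mutation_rate=0.1, rng=random.Random(1)):
    """cost[c][p] = cost of serving consumer c from point p; minimize total cost."""
    def fitness(ind):
        return sum(cost[c][p] for c, p in enumerate(ind))

    # an individual assigns each consumer to one of the remaining preparation points
    pop = [[rng.randrange(n_points) for _ in range(n_consumers)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]                 # selection: keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_consumers)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_consumers):            # mutation
                if rng.random() < mutation_rate:
                    child[i] = rng.randrange(n_points)
            children.append(child)
        pop = elite + children
    best = min(pop, key=fitness)
    return best, fitness(best)

# Example: 6 consumers, 3 preparation points, random illustrative costs
rng = random.Random(0)
cost = [[rng.uniform(1, 10) for _ in range(3)] for _ in range(6)]
print(genetic_assignment(cost, 6, 3))
```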
-
Combining the agent approach and the general equilibrium approach to analyze the influence of the shadow sector on the Russian economy
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 669-684
This article discusses the influence of the shadow, informal and household sectors on the dynamics of a stochastic model with heterogeneous agents. The study integrates the general equilibrium approach, which explains the behavior of demand, supply and prices in an economy with several interacting markets, with a multi-agent approach. The analyzed model describes an economy with aggregate uncertainty and with an infinite number of heterogeneous agents (households). The source of heterogeneity is the idiosyncratic income shocks of agents in the legal and shadow sectors of the economy. The analysis uses an algorithm that approximates the dynamics of the distribution function of the capital stocks of individual agents by the dynamics of its first and second moments. The synthesis of the agent approach and the general equilibrium approach is carried out through a computer implementation of the recursive feedback between microagents and the macroenvironment. The behavior of the impulse response functions of the main variables of the model confirms the positive influence of the shadow economy (below a certain limit) on reducing the rate of decline in economic indicators during recessions, especially for developing economies. The scientific novelty of the study is the combination of a multi-agent approach and a general equilibrium approach for modeling macroeconomic processes at the regional and national levels. Further research may involve the use of more detailed general equilibrium models, which make it possible, in particular, to describe the behavior of heterogeneous groups of agents in the entrepreneurial sector of the economy.
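The following toy simulation illustrates the general idea of tracking only the first and second moments of the agents' capital distribution while feeding the aggregate back into individual dynamics; the savings rule, return function and shock parameters are invented for illustration and are not the model of the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_periods = 10_000, 200
k = np.ones(n_agents)                  # individual capital stocks
moments = []

for t in range(n_periods):
    K = k.mean()                       # macro state seen by every agent
    r = 0.04 * K ** (-0.5)             # stylized return depending on aggregate K (assumption)
    shock = rng.lognormal(mean=0.0, sigma=0.1, size=n_agents)  # idiosyncratic income shocks
    k = 0.9 * k * (1 + r) + 0.1 * shock    # toy savings rule (assumption)
    moments.append((k.mean(), k.var()))    # first and second moments of the distribution

print(moments[-1])
```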
-
Study of the dynamics of the structure of oligopolistic markets with non-market opposition parties
Computer Research and Modeling, 2021, v. 13, no. 1, pp. 219-233
The article examines the impact of non-market actions of participants in oligopolistic markets on the market structure. The following actions of one of the market participants, aimed at increasing its market share, are analyzed: 1) price manipulation; 2) blocking the investments of stronger oligopolists; 3) destruction of the products and production capacities of competitors. Linear dynamic games with a quadratic criterion are used to model the strategies of the oligopolists. Their use is justified by the possibility of both adequately describing the evolution of markets and implementing two mutually complementary approaches to determining the strategies of the oligopolists: 1) based on the representation of the models in the state space and the solution of generalized Riccati equations; 2) based on the application of operational calculus methods (in the frequency domain), which provides the clarity necessary for economic analysis.
The article shows the equivalence of the approaches to solving the problem with maximin criteria of the oligopolists in the state space and in the frequency domain. The results of the calculations are considered for a duopoly with indicators close to those of one of the duopolies in the global microelectronics industry. The second duopolist is less efficient in terms of costs, though more flexible. Its goal is to increase its market share by implementing the non-market methods listed above.
Calculations carried out with the help of the game model made it possible to construct dependencies that characterize the relationship between the relative increases in production volumes of the weak and the strong duopolist over a 25-year period under price manipulation. The constructed dependencies show that, for the accepted linear demand function, an increase in price leads to a very small increase in the production of the strong duopolist, but simultaneously to a significant increase in this indicator for the weak one.
Calculations carried out with the other variants of the model show that blocking investments, as well as destroying the products of the strong duopolist, leads to a more significant increase in the marketable output of the weak duopolist than to a decrease in this indicator for the strong one.
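Schematically (in generic notation, without reproducing the specific matrices or the maximin formulation of the article), a linear dynamic game with a quadratic criterion for two players has the form

$$
x_{t+1}=Ax_t+B_1u_{1,t}+B_2u_{2,t},\qquad
J_i=\sum_{t=0}^{T}\big(x_t^{\top}Q_ix_t+u_{i,t}^{\top}R_iu_{i,t}\big),\quad i=1,2,
$$

where $x_t$ is the state of the market (for example, outputs and capacities) and $u_{1,t}$, $u_{2,t}$ are the controls of the two duopolists; equilibrium linear feedback strategies $u_{i,t}=-K_ix_t$ can be obtained either from coupled (generalized) Riccati equations in the state space or, equivalently, by operational calculus methods in the frequency domain.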