-
The 3rd BRICS Mathematics Conference
Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1015-1016
-
The error accumulation in the conjugate gradient method for degenerate problem
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 459-472
In this paper, we consider the conjugate gradient method for minimizing a quadratic function whose gradient is corrupted by additive noise. Three noise concepts are considered: antagonistic noise in the linear term, stochastic noise in the linear term, and noise in the quadratic term, as well as combinations of the first two with the last. Experiments show that error accumulation is absent for any of these concepts, contrary to the folklore expectation that, as in accelerated methods, errors must accumulate. The paper motivates why the error may not accumulate. We also experimentally investigate how the solution error depends both on the magnitude (scale) of the noise and on the size (2-norm) of the solution obtained by the conjugate gradient method. Hypotheses about these dependences are proposed and tested for all the concepts considered; it turns out that the error in the solution (in function value) depends linearly on the noise scale. The paper contains graphs illustrating each individual study, as well as a detailed description of the numerical experiments, including how noise is applied to both the vector and the matrix.
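The paper's exact noise protocols are not reproduced in the abstract; a minimal sketch of the experiment, assuming Gaussian noise of scale `delta` added to each residual (gradient) evaluation, might look like this:

```python
import numpy as np

def noisy_cg(A, b, delta=0.0, iters=50, tol=1e-12, rng=None):
    """Conjugate gradients for 0.5*x'Ax - b'x with additive noise of
    scale `delta` injected into every residual evaluation.
    A hypothetical sketch; the paper studies several distinct noise models."""
    rng = rng or np.random.default_rng(0)
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x + delta * rng.standard_normal(n)  # noisy initial residual
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)          # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * Ap + delta * rng.standard_normal(n)
        if np.linalg.norm(r_new) < tol:     # only reachable when delta == 0
            break
        beta = (r_new @ r_new) / (r @ r)    # Fletcher-Reeves update
        p = r_new + beta * p
        r = r_new
    return x
```

Running this across a grid of `delta` values and plotting the final error in function value against `delta` reproduces the kind of noise-scale sweep the abstract describes.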
-
Automated citation graph building from a corpora of scientific documents
Computer Research and Modeling, 2012, v. 4, no. 4, pp. 707-719
In this paper, the problem of automatically building a citation graph from a collection of scientific documents is treated as a sequence of machine learning tasks. The overall data processing technology consists of six stages: preprocessing, metainformation extraction, extraction of bibliography lists, splitting bibliography lists into separate bibliography records, standardization of each record, and record linkage. The goal of the paper is to survey approaches and algorithms suitable for each stage, motivate the choice of the best combination of algorithms, and adapt some of them to multilingual bibliography processing. For some of the tasks, new algorithms and heuristics are proposed and evaluated on a mixed corpus of English and Russian documents.
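The final record-linkage stage can be illustrated with a toy matcher; the Jaccard similarity on token sets and the 0.6 threshold below are illustrative stand-ins, not the paper's actual algorithm:

```python
import re

def normalize(record):
    """Crude standardization: lowercase and tokenize a bibliography string."""
    return set(re.findall(r"\w+", record.lower()))

def same_reference(a, b, threshold=0.6):
    """Decide whether two bibliography records cite the same work,
    using Jaccard similarity of their token sets (a toy linkage rule)."""
    ta, tb = normalize(a), normalize(b)
    return len(ta & tb) / len(ta | tb) >= threshold
```

Real linkage stages typically add field-aware comparisons (authors, year, pages) and blocking to avoid comparing all pairs, but the thresholded-similarity skeleton is the same.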
-
The motivation method in the Germeyer’s games at modeling three-level control system of the ship’s ballast water
Computer Research and Modeling, 2014, v. 6, no. 4, pp. 535-542
A static game-theoretic model of a three-level control system for a ship's ballast water is built. Methods of hierarchical control are used, taking into account the requirement of keeping the system in a given state. The results of studying the model in terms of Germeyer's games $\Gamma_1$ and $\Gamma_2$ are compared, and numerical calculations for some typical cases are given.
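In a Germeyer $\Gamma_1$ game the leader announces its strategy and the follower best-responds; the leader's guaranteed result on a finite strategy grid can be computed directly. The payoff matrices and worst-case tie-breaking below are illustrative, not taken from the paper:

```python
import numpy as np

def gamma1_guarantee(leader_payoff, follower_payoff):
    """Guaranteed leader payoff in a finite Germeyer Gamma_1 game:
    for each announced leader strategy i, the follower picks a best
    response; ties are resolved against the leader (worst case)."""
    best = -np.inf
    for i in range(leader_payoff.shape[0]):
        # indices of the follower's best responses to strategy i
        br = np.flatnonzero(follower_payoff[i] == follower_payoff[i].max())
        best = max(best, leader_payoff[i, br].min())
    return best
```

The $\Gamma_2$ game, where the leader announces a strategy as a function of the follower's action, generally gives the leader a payoff at least as large, which is the comparison the paper carries out for the ballast-water system.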
-
Speeding up the two-stage simultaneous traffic assignment model
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 343-355
This article describes possible improvements to the code of a simultaneous multi-stage transport model, aimed at speeding up computations and refining the model's level of detail. The model consists of two blocks: the first calculates the correspondence matrix, and the second computes the equilibrium distribution of traffic flows along the routes. The first block uses a matrix of transport costs, which describes the costs (time, in our case) of travel from one area to another, to calculate the matrix of correspondences. The second block describes exactly how the drivers (agents) are distributed along the possible paths; knowing the distribution of flows along the paths, one can recalculate the cost matrix. Equilibrium in the two-stage traffic flow model is a fixed point of the sequence of these two models. In this paper we report an attempt to speed up the part of the model based on Dijkstra's algorithm, which computes the shortest path from one point to another and must be recalculated after each iteration of the flow-distribution part. We also study and implement road pricing in the model code, and we replace the Sinkhorn algorithm in the correspondence-matrix calculation with a faster implementation. The paper begins with a short theoretical overview of the motivation for transport modelling, discusses current approaches, and provides a demonstration of how the whole cycle of multi-stage transport modelling works.
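The Sinkhorn step that the authors replace with a faster implementation can be sketched generically: given a travel-cost matrix and the departure/arrival totals, alternating scalings recover the correspondence matrix. The regularization `gamma` and iteration count here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def sinkhorn(cost, l, w, gamma=1.0, iters=200):
    """Sinkhorn iterations for a correspondence matrix d with row sums l
    (departures by origin) and column sums w (arrivals by destination),
    given a travel cost matrix. A generic sketch of the classical scheme."""
    K = np.exp(-cost / gamma)      # Gibbs kernel from the cost matrix
    u = np.ones(len(l))
    v = np.ones(len(w))
    for _ in range(iters):
        u = l / (K @ v)            # rescale to match row marginals
        v = w / (K.T @ u)          # rescale to match column marginals
    return u[:, None] * K * v[None, :]
```

In the two-stage model this routine alternates with the flow-distribution block: the costs produced by the equilibrium assignment feed back into the next Sinkhorn call until a fixed point is reached.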
-
Hypergeometric functions in model of General equilibrium of multisector economy with monopolistic competition
Computer Research and Modeling, 2017, v. 9, no. 5, pp. 825-836
We show that basic properties of some models of monopolistic competition are described by families of hypergeometric functions. The results are obtained by building a general equilibrium model of a multisector economy in which a differentiated good is produced in $n$ high-tech sectors, where single-product firms compete monopolistically using the same technology. The homogeneous (traditional) sector is characterized by perfect competition. Workers are motivated to seek jobs in the high-tech sectors, where wages are higher, but they risk remaining unemployed; unemployment persists in equilibrium because of labor market imperfections. Wages in the high-tech sectors are set by firms as a result of negotiations with employees. Individuals are assumed to be homogeneous consumers with identical preferences given by a separable utility function of general form. Conditions are found under which the general equilibrium in the model exists and is unique; they are formulated in terms of the elasticity of substitution $\mathfrak{S}$ between varieties of the differentiated good, averaged over all consumers. The equilibrium found is symmetric with respect to the varieties of the differentiated good. The equilibrium variables can be represented as implicit functions whose properties are connected with the elasticity $\mathfrak{S}$ introduced by the authors. A complete analytical description of the equilibrium variables is possible for known special cases of the consumers' utility function, for example, power functions, which, however, fail to describe the response of the economy to changes in market size. To simplify the implicit functions, we introduce a utility function defined by two one-parameter families of hypergeometric functions.
One family describes a pro-competitive and the other an anti-competitive response of prices to an increase in the size of the economy. Varying the parameter of each family covers all possible values of the elasticity $\mathfrak{S}$; in this sense, the hypergeometric functions exhaust the natural utility functions. It is established that as the elasticity of substitution between varieties of the differentiated good increases, the difference between the high-tech and homogeneous sectors is erased. It is shown that when the economy is large, individuals in equilibrium consume a small amount of each product, as in the case of power preferences. This fact makes it possible to approximate the hypergeometric functions by a sum of power functions in a neighborhood of the equilibrium values of the argument. Thus, replacing power utility functions with hypergeometric ones approximated by a sum of two power functions retains, on the one hand, all the flexibility in tuning the parameters and, on the other hand, makes it possible to describe the effects of changing the size of the economy's sectors.
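The abstract does not reproduce the concrete one-parameter families; for reference, the classical Gauss hypergeometric series, the usual building block for such families, is

```latex
{}_2F_1(a, b; c; z) \;=\; \sum_{k=0}^{\infty} \frac{(a)_k\,(b)_k}{(c)_k\;k!}\, z^k,
\qquad (q)_k = q(q+1)\cdots(q+k-1),
```

and truncating this series to its leading terms near the equilibrium argument yields exactly the sum-of-two-power-functions approximation mentioned above.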
-
Assessing the impact of deposit benchmark interest rate on banking loan dynamics
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 1023-1032
Deposit benchmark interest rates are a policy tool used by banking regulators to set the interest rates offered to depositors, maintaining equitable and competitive rates within the financial industry. The benchmark serves as a reference for pricing various banking products, expenses, and financial choices. It directly affects the amount of money deposited, which in turn determines the amount available for lending. We are therefore motivated to analyze the influence of deposit benchmark interest rates on the dynamics of banking loans. This study examines the issue using a difference equation for banking loans, in which the loan amount in the next period is determined by both the present loan volume and information on its marginal profit. The loan equilibrium point and its stability are analyzed, as are the bifurcations arising in the model. To ensure a stable banking loan, the benchmark rate must be set higher than the flip bifurcation value and lower than the transcritical bifurcation value. This result is confirmed by the bifurcation diagram and the associated Lyapunov exponent: an insufficient deposit benchmark interest rate can lead to chaotic dynamics in bank lending. A bifurcation diagram in two parameters is also shown. We perform a numerical sensitivity analysis by examining contour plots of the stability conditions as they vary with the deposit benchmark interest rate and other parameters. In addition, we examine a nonstandard difference scheme for the model, assess its stability, and compare it with the standard model. The outcomes of our study can provide valuable insights to banking regulators making informed decisions about deposit benchmark interest rates, taking several other banking factors into account.
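The abstract does not give the loan difference equation itself, but the diagnostic it relies on, the Lyapunov exponent of a one-dimensional map, is generic. The sketch below estimates it for any smooth map; the logistic map in the usage test is only a stand-in for the paper's loan dynamics:

```python
import numpy as np

def lyapunov(f, df, x0, n=2000, burn=200):
    """Estimate the Lyapunov exponent of a 1-D map x_{t+1} = f(x_t)
    as the orbit average of log|f'(x_t)|, after discarding a transient.
    A positive value indicates chaotic dynamics."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = f(x)
    s = 0.0
    for _ in range(n):
        s += np.log(abs(df(x)))    # local stretching rate
        x = f(x)
    return s / n
```

Sweeping the benchmark rate as the map's parameter and plotting this estimate alongside the bifurcation diagram is exactly the kind of confirmation the abstract describes.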
-
Mathematical models of combat and military operations
Computer Research and Modeling, 2020, v. 12, no. 1, pp. 217-242
Simulation of combat and military operations is a most important scientific and practical task aimed at providing commanders with quantitative grounds for decision-making. The first combat models were developed during the First World War (M. Osipov, F. Lanchester), and they are now widely used in connection with the massive introduction of automation tools. At the same time, models of combat and war do not fully take into account the moral potential of the parties to the conflict, which motivates the further development of such models. A probabilistic model of combat is considered in which the combat superiority parameter is determined through a morale parameter (the ratio of the percentages of losses sustained by the parties) and a technological superiority parameter. To assess the latter, the following are taken into account: command experience (the ability to organize coordinated actions), and the reconnaissance, fire, maneuver and operational (combat) support capabilities of the parties. A game-theoretic offense-defense model has been developed that takes into account the actions of the first and second echelons (reserves) of the parties. The objective function of the attackers in the model is the product of the probability that the first echelon breaks through one of the defense points and the probability that the second echelon repels a counterattack by the defenders' reserve. The particular problem of managing the breakthrough of defense points is solved, and the optimal distribution of combat units between the echelons is found. The share of troops allocated by the parties to the second echelon (reserve) increases with the value of the attackers' aggregate combat superiority parameter and decreases with the value of the combat superiority parameter when repelling a counterattack.
When planning a battle (engagement, operation) and distributing troops between echelons, it is important to know not so much the exact number of enemy troops as their capabilities, as well as the degree of preparedness of the defense, which does not contradict the experience of warfare. Depending on the situation, the goal of an offensive may be to defeat the enemy, to quickly capture an important area in the depth of the enemy's defense, to minimize one's own losses, etc. To scale the offense-defense model to these goals, the dependences of the losses and the rate of advance on the initial ratio of the parties' combat potentials were found. The influence of social costs on the course and outcome of wars is taken into account, and a theoretical explanation is given for losing a military campaign against a technologically weaker adversary when the goal of the war is unclear to society. To account for the influence of psychological operations and information warfare on the moral potential of individuals, a model of social and information influence is used.
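The classical Lanchester attrition dynamics mentioned above can be sketched in a few lines; the effectiveness coefficients `a`, `b` and the explicit Euler step are illustrative choices, not values from the paper:

```python
def lanchester(x0, y0, a, b, dt=1e-3):
    """Lanchester 'square law' attrition: each side's loss rate is
    proportional to the opposing force size (dx/dt = -a*y, dy/dt = -b*x).
    Integrated with explicit Euler until one side is annihilated."""
    x, y = float(x0), float(y0)
    while x > 0 and y > 0:
        x, y = x - a * y * dt, y - b * x * dt
    return max(x, 0.0), max(y, 0.0)
```

The square law's invariant $b\,x^2 - a\,y^2$ implies that, with equal effectiveness, a force of 100 defeats a force of 50 with roughly $\sqrt{100^2 - 50^2} \approx 86.6$ survivors, which is why force ratios rather than raw numbers dominate such models.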
-
Modelling interregional migration flows by the cellular automata
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1467-1483
The article deals with developing and justifying the most adequate tools for forecasting the value and structure of interregional migration flows. Migration processes have a significant impact on the size and demographic structure of the population of territories and on the state and balance of regional and local labor markets.
To analyze migration processes and assess their impact, an economic-mathematical toolkit is required that can model migration processes and flows for different areas with the desired precision. Current methods and approaches to modelling migration processes are reviewed, together with their advantages and disadvantages. It is noted that many of these methods require mass aggregated statistical data, which is not always available and does not characterize migrants' behavior at the local level, where the decision to move to a new place of residence is actually made. This significantly affects both the applicability of the corresponding modelling techniques and the accuracy of forecasts of the magnitude and structure of migration flows.
A cellular automata model of interregional migration flows, integrating a model of household migration behavior under bounded rationality into a general model of the area's migration flow, was developed and tested on data for the Primorye Territory. To implement the household behavior model under bounded rationality, an integral attractiveness index of the regions, with economic, social and ecological components, is proposed.
To evaluate the predictive capacity of the developed model, it was compared with available cellular automata models used to predict interregional migration flows, using out-of-sample prediction; the comparison showed a statistically significant advantage of the proposed model. The model makes it possible to obtain forecasts and quantitative characteristics of an area's migration flows based on households' real migration behavior at the local level, taking into account their living conditions and behavioral motives.
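The authors' cellular-automaton rules and attractiveness index are not specified in the abstract; a toy synchronous update, assuming a fixed mobility share and a softmax over a scalar attractiveness index per region, might look like this:

```python
import numpy as np

def migration_step(pop, attract, mobility=0.05):
    """One synchronous CA step: a `mobility` share of each region's
    population leaves, and movers redistribute across regions in
    proportion to a softmax of the attractiveness index.
    A toy scheme, not the authors' household-level rules."""
    w = np.exp(attract - attract.max())   # softmax weights (stable form)
    w = w / w.sum()
    movers = mobility * pop               # outflow from each region
    return pop - movers + movers.sum() * w
```

Iterating this step conserves total population while shifting it toward more attractive regions; the paper's model replaces the softmax with household-level decisions under bounded rationality and a multi-component attractiveness index.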