- Analysing the impact of migration on background social strain using a continuous social stratification model
Computer Research and Modeling, 2022, v. 14, no. 3, pp. 661-673
The background social strain of a society can be quantitatively estimated using various statistical indicators. Mathematical models that allow forecasting the dynamics of social strain describe various social processes successfully. If the number of interacting groups is small, the dynamics of the corresponding indicators can be modelled with a system of ordinary differential equations. As the number of interacting components grows, so does the complexity, which makes the analysis of such models a challenging task. A continuous social stratification model can be viewed as the result of passing from a discrete number of interacting social groups to their continuous distribution over a finite interval. In such a model, social strain naturally spreads locally between neighbouring groups, whereas in reality the social elite influences the whole society via news media, and the Internet allows non-local interaction between social groups. These factors can nevertheless be taken into account, to some extent, through the model term describing negative external influence on society. In this paper, we develop a continuous social stratification model describing the dynamics of two societies connected through migration. We assume that people migrate from the social group of the donor society with the highest strain level to the poorer social layers of the acceptor society, transferring social strain at the same time. We assume that all model parameters are constants, which is a realistic assumption only for small societies. Using the finite volume method, we construct a spatial discretization of the problem capable of reproducing the finite propagation speed of social strain. We verify the discretization by comparing the results of numerical simulations with exact solutions of the auxiliary non-linear diffusion equation. We perform a numerical analysis of the proposed model for different values of the model parameters, study the impact of migration intensity on the stability of the acceptor society, and find the destabilization conditions. The results obtained in this work can be used in further analysis of the model in the more realistic case of inhomogeneous coefficients.
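As a rough illustration of the discretization being verified here (a sketch, not the authors' code; the diffusivity law $D(u) = u^m$, the grid, and the initial profile are assumptions made for the example), an explicit finite-volume step for a degenerate non-linear diffusion equation $u_t = (D(u)\,u_x)_x$ can look as follows. With such a degenerate diffusivity the exact solutions have compact support, so a correct scheme keeps the propagation speed of the strain front finite.

```python
import numpy as np

def fv_step(u, dx, dt, m=2.0):
    """One explicit finite-volume step for u_t = (u^m u_x)_x with zero-flux boundaries."""
    # Diffusivity at cell interfaces (arithmetic mean of the neighbouring cells).
    d = 0.5 * (u[1:] ** m + u[:-1] ** m)
    # Interface fluxes F_{i+1/2} = -D * (u_{i+1} - u_i) / dx.
    flux = -d * (u[1:] - u[:-1]) / dx
    # Zero-flux ("closed society") boundaries.
    flux = np.concatenate(([0.0], flux, [0.0]))
    return u - dt / dx * (flux[1:] - flux[:-1])

# Usage: a compactly supported strain profile stays compactly supported.
x = np.linspace(0.0, 1.0, 201)
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)
for _ in range(2000):
    u = fv_step(u, dx=x[1] - x[0], dt=1e-5)
```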
- Experimental comparison of PageRank vector calculation algorithms
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 369-379
Finding the PageRank vector is of great scientific and practical interest due to its applicability to modern search engines. Although this problem reduces to finding the eigenvector of the stochastic matrix $P$, the need for new algorithms is justified by the large size of the input data. To achieve no more than linear execution time, various randomized methods have been proposed that return the expected result only with some probability close enough to one. We consider two of them by reducing the problem of calculating the PageRank vector to the problem of finding an equilibrium in an antagonistic matrix game, which is then solved using the Grigoriadis – Khachiyan algorithm. This implementation works effectively under the assumption that the input matrix is sparse. As far as we know, there are no successful implementations of either the Grigoriadis – Khachiyan algorithm or its application to the task of calculating the PageRank vector. The purpose of this paper is to fill this gap. The article describes the algorithm, giving pseudocode and some implementation details. In addition, it discusses another randomized method of calculating the PageRank vector, namely Markov chain Monte Carlo (MCMC), in order to compare the results of these algorithms on matrices with different values of the spectral gap. The latter is of particular interest, since the magnitude of the spectral gap strongly affects the convergence rate of MCMC and does not affect the other two approaches at all. The comparison was carried out on two types of generated graphs: chains and $d$-dimensional cubes. The experiments, as predicted by the theory, demonstrated the effectiveness of the Grigoriadis – Khachiyan algorithm in comparison with MCMC for sparse graphs with a small spectral gap. The code is publicly available, so everyone can reproduce the results or use this implementation for their own needs. The work has a purely practical orientation; no theoretical results were obtained.
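For reference, the MCMC approach compared in the paper can be sketched as follows (an illustrative sketch, not the article's implementation; the graph, the step count and the damping factor $\alpha = 0.85$ are assumptions made for the example): a random walk follows a random out-link with probability $\alpha$ and teleports to a uniformly random node otherwise, and the visit frequencies approximate the PageRank vector.

```python
import random
from collections import Counter

def mcmc_pagerank(out_links, n_steps=200_000, alpha=0.85, seed=0):
    """Estimate PageRank by visit frequencies of a teleporting random walk."""
    rng = random.Random(seed)
    nodes = list(out_links)
    visits = Counter()
    v = rng.choice(nodes)
    for _ in range(n_steps):
        visits[v] += 1
        if out_links[v] and rng.random() < alpha:
            v = rng.choice(out_links[v])   # follow a random out-link
        else:
            v = rng.choice(nodes)          # teleport (damping / dangling node)
    # Nodes never visited simply get frequency 0.
    return {u: visits[u] / n_steps for u in nodes}

# Usage on a small chain graph 0 -> 1 -> 2 -> 3 (one of the generated graph families).
chain = {0: [1], 1: [2], 2: [3], 3: []}
print(mcmc_pagerank(chain))
```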
- Simple behavioral model of imprint formation
Computer Research and Modeling, 2014, v. 6, no. 5, pp. 793-802
In an unknown environment, adequate behavioral patterns are formed through exploratory behavior. At the same time, rapid formation of an acceptable pattern is preferable to lengthy elaboration of a perfect pattern through repeated replay of the learning situation. In extreme situations, the phenomenon of imprinting is observed: instantaneous imprinting of a behavior pattern that ensures the survival of the individual. In this paper we propose a hypothesis and a model of imprinting in which the neural network of a virtual robot, trained on a single successful pattern, demonstrates effective functioning. The realism of the model is assessed by checking how stable the reproduced behavior pattern is to perturbations of the situation in which the imprint was formed.
- The model of interference of long waves of economic development
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 649-663
The article substantiates the need to develop and analyze mathematical models that take into account the mutual influence of long (Kondratiev) waves of economic development. The analysis of the available publications shows that, at the model level, the direct and inverse relationships between intersecting long waves are still insufficiently studied. As practice shows, the production of the current long wave can receive an additional impetus for growth from the technologies of the next long wave. The technologies of the next industrial revolution often serve as improving innovations for the industries born of the previous industrial revolution. As a result, the new long wave increases the amplitude of the oscillations of the trajectory of the previous long wave. Such results of the interaction of long waves in the economy are similar to the effects of interference of physical waves. The mutual influence of the recessions and booms of the economies of different countries gives even more grounds for comparing the consequences of this mutual influence with the interference of physical waves. The article presents a model for the development of the technological base of production that takes into account the possibilities of combining old and new technologies. The model consists of several sub-models. The use of a different mathematical description for the individual stages of updating the technological base of production makes it possible to account for the significant differences between the successive phases of the life cycle of general purpose technologies, considered in modern literature as the technological basis of industrial revolutions. One of these phases is the period of formation of the infrastructure necessary for the intensive diffusion of a new general purpose technology and for the rapid development of industries using this technology. The model is used for illustrative calculations with values of the exogenous parameters corresponding to the logic of changing long waves. For all the conditionality of the illustrative calculations, the configuration of the curve representing the change in the return on capital over the simulated period is close to the configuration of the real trajectory of the return on private fixed assets in the US economy over 1982-2019. The article also indicates the factors that remained outside the scope of the presented model but should be taken into account when describing the interference of long waves of economic development.
- Mathematical features of individual dosimetric planning of radioiodotherapy based on pharmacokinetic modeling
Computer Research and Modeling, 2024, v. 16, no. 3, pp. 773-784
When determining therapeutic absorbed doses in the course of radioiodine therapy, the method of individual dosimetric planning is increasingly used in Russian medicine. However, for the successful implementation of this method, appropriate software is needed that makes it possible to model the pharmacokinetics of radioiodine in the patient's body and to calculate the therapeutic activity of a radiopharmaceutical drug required to achieve the planned therapeutic absorbed dose in the thyroid gland.
The purpose of the work is the development of a software package for pharmacokinetic modeling and calculation of individual absorbed doses in radioiodine therapy based on a five-compartment model of radioiodine kinetics using two mathematical optimization methods. The work is based on the principles and methods of the pharmacokinetics of radiopharmaceutical drugs (compartmental modeling). To find the minimum of the residual functional when identifying the values of the transport constants of the model, the Hooke – Jeeves method and the simulated annealing method were used. The calculation of the dosimetric characteristics and of the administered therapeutic activity is based on the method of calculating absorbed doses from the radioiodine activity functions in the compartments found during modeling. To identify the parameters of the model, the results of radiometry of the thyroid gland and of the urine of patients with radioiodine introduced into the body were used.
A software package for modeling the kinetics of radioiodine during its oral intake has been developed. For patients with diffuse toxic goiter, the transport constants of the model were identified and the individual pharmacokinetic and dosimetric characteristics (elimination half-lives, maximum thyroid activity and the time to reach it, absorbed doses to critical organs and tissues, administered therapeutic activity) were calculated. The activity-time relationships for all compartments of the model are obtained and analyzed. A comparative analysis of the pharmacokinetic and dosimetric characteristics calculated using the two mathematical optimization methods was performed. The stunning effect and its contribution to the errors in calculating absorbed doses were also evaluated. The comparative analysis shows that the use of the more complex simulated annealing method in the software package does not lead to significant changes in the values of the characteristics compared to the simple Hooke – Jeeves method. The errors in calculating absorbed doses within these mathematical optimization methods do not exceed the spread of absorbed dose values caused by the stunning effect.
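A minimal sketch of the identification step, under strong simplifying assumptions: the model below has only three compartments instead of five, the radiometry data are invented, and scipy's dual_annealing stands in for the simulated annealing method (the Hooke – Jeeves direct search is not shown). It only illustrates how transport constants can be fitted by minimizing a least-squares residual between a compartment model and the measurements.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import dual_annealing

# Hypothetical radiometry data: time (hours) and measured thyroid activity fraction.
t_obs = np.array([2.0, 6.0, 24.0, 48.0, 96.0])
a_obs = np.array([0.15, 0.30, 0.42, 0.35, 0.22])

def thyroid_activity(k, t):
    """Thyroid activity of a simplified compartment chain gut -> blood -> thyroid."""
    k01, k12, k20, k23 = k  # gut->blood, blood->thyroid, blood->excretion, thyroid->release
    def rhs(_, y):
        gut, blood, thyroid = y
        return [-k01 * gut,
                k01 * gut - (k12 + k20) * blood,
                k12 * blood - k23 * thyroid]
    sol = solve_ivp(rhs, (0.0, t[-1]), [1.0, 0.0, 0.0], t_eval=t, rtol=1e-8)
    return sol.y[2]

def residual(k):
    """Least-squares misfit between the compartment model and the radiometry data."""
    return float(np.sum((thyroid_activity(k, t_obs) - a_obs) ** 2))

result = dual_annealing(residual, bounds=[(1e-3, 2.0)] * 4, maxiter=200, seed=1)
print("fitted transport constants (1/h):", result.x)
print("residual:", result.fun)
```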
- Extracting knowledge from text messages: overview and state-of-the-art
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1291-1315
In general, solving the information explosion problem can be delegated to systems for automatic processing of digital data. These systems are intended for recognizing, sorting, meaningfully processing and presenting data in formats readable and interpretable by humans. The creation of intelligent knowledge extraction systems that handle unstructured data would be a natural solution in this area. At the same time, the evident progress in these tasks for structured data contrasts with the limited success of unstructured data processing, and, in particular, document processing. Currently, this research area is undergoing active development and investigation. The present paper is a systematic survey of both Russian and international publications dedicated to the leading trend in automatic text data processing: Text Mining (TM). We cover the main tasks and notions of TM, as well as its place in the current AI landscape. Furthermore, we analyze the complications that arise during the processing of texts written in natural language, which are weakly structured and often provide ambiguous linguistic information. We describe the stages of text data preparation, cleaning, and feature selection which, alongside the data obtained via morphological, syntactic, and semantic analysis, constitute the input for the TM process. This process can be represented as mapping a set of text documents to «knowledge». Using the case of stock trading, we demonstrate the formalization of the problem of making a trade decision based on a set of analytical recommendations. Examples of such mappings are methods of Information Retrieval (IR), text summarization, sentiment analysis, document classification and clustering, etc. The common point of all tasks and techniques of TM is the selection of word forms and their derivatives used to recognize content in natural-language symbol sequences. Considering IR as an example, we examine classic types of search, such as searching for word forms, phrases, patterns and concepts. Additionally, we consider the augmentation of patterns with syntactic and semantic information. Next, we provide a general description of the NLP instruments: morphological, syntactic, semantic and pragmatic analysis. Finally, we end the paper with a comparative analysis of modern TM tools which can be helpful for selecting a suitable TM platform based on the user's needs and skills.
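One of the TM building blocks mentioned above, Information Retrieval over word forms, can be illustrated with a minimal vector-space sketch (the documents and the query are invented for the example, and scikit-learn's TfidfVectorizer is used only as a stand-in for the platforms surveyed):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy "analytical recommendations" and a query, in the spirit of the stock-trading case.
docs = [
    "analyst upgrades the stock and recommends buying",
    "the company reports weak earnings, shares fall",
    "new product launch drives revenue growth",
]
query = ["should we buy the stock after the earnings report"]

# Map documents and query into the TF-IDF vector space.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(docs)
query_vector = vectorizer.transform(query)

# Rank documents by cosine similarity to the query.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for i in scores.argsort()[::-1]:
    print(f"{scores[i]:.3f}  {docs[i]}")
```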
- Mathematical methods for stabilizing the structure of social systems under external disturbances
Computer Research and Modeling, 2021, v. 13, no. 4, pp. 845-857
The article considers a bilinear model of the influence of external disturbances on the stability of the structure of social systems. Approaches to third-party stabilization of the initial system consisting of two groups are investigated by reducing the initial system to a linear system with uncertain parameters and using the results of the theory of linear dynamic games with a quadratic criterion. The influence of the coefficients of the proposed model of the social system and of the control parameters on the quality of stabilization is analyzed with the help of computer experiments. It is shown that the use of a minimax strategy by a third party in the form of feedback control leads to a relatively close convergence of the population of the second group (excited by external influences) to an acceptable level, even under unfavorable periodic dynamic perturbations.
The influence of one of the key coefficients in the criterion $(\varepsilon)$, used to compensate for the effects of external disturbances (the latter are present in the linear model in the form of uncertainty), on the quality of system stabilization is investigated. Using the Z-transform, it is shown that a decrease in the coefficient $\varepsilon$ should lead to an increase in the values of the sum of the squares of the control. The computer calculations carried out in the article also show that the improvement in the convergence of the system structure to the equilibrium level with a decrease in this coefficient is achieved through sharp changes in the control in the initial period, which may induce the transition of some members of the quiet group to the second, excited group.
The article also examines the influence of the values of the model coefficients that characterize the level of social tension on the quality of control. Calculations show that an increase in the level of social tension (all other things being equal) requires a significant increase in the third party's stabilizing efforts, as well as in the control values during the transition period.
The results of the statistical modeling carried out in the article show that the calculated feedback controls successfully compensate for random disturbances acting on the social system (both «white» noise and autocorrelated disturbances).
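A loose sketch of the linear-quadratic feedback machinery referred to above (the 2 × 2 matrices and the value of $\varepsilon$ are hypothetical, and the discrete-time LQR regulator shown here is a simplification of the minimax game-theoretic control used in the article):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical 2-group dynamics: x = (quiet group, excited group) deviations
# from the equilibrium structure; the third party acts on the excited group.
A = np.array([[0.95, 0.10],
              [0.05, 1.02]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)              # penalty on deviation of the system structure
eps = 0.1
R = eps * np.eye(1)        # penalty on the stabilizing effort (the coefficient epsilon)

# Discrete-time Riccati equation gives the stabilizing state-feedback gain u_t = -K x_t.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

x = np.array([0.0, 1.0])   # an external disturbance has excited the second group
for t in range(8):
    u = -K @ x
    x = A @ x + B.ravel() * u[0]
    print(t, x.round(3), round(float(u[0]), 3))
```

Smaller values of eps make the control cheaper and the initial control actions sharper, which mirrors the effect of decreasing $\varepsilon$ discussed in the abstract.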
- Raising convergence order of grid-characteristic schemes for 2D linear elasticity problems using operator splitting
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 899-910
The grid-characteristic method is successfully used for solving hyperbolic systems of partial differential equations (for example, transport, acoustic or elastic equations). It allows one to construct correct algorithms at contact boundaries and at the boundaries of the integration domain, to take the physics of the problem (propagation of discontinuities along characteristic curves) into account to a certain extent, and it has the monotonicity property, which is important for the problems considered. For two-dimensional and three-dimensional problems the method relies on a coordinate splitting technique, which makes it possible to solve the original equations by solving several one-dimensional ones consecutively. It is common to use one-dimensional schemes of up to 3rd order together with simple splitting techniques, which do not allow the convergence order in time to exceed two. Significant advances have been made in operator splitting theory, and the existence of higher-order splitting schemes has been proved. Their peculiarity is the need to perform a step backward in time, which gives rise to difficulties, for example, for parabolic problems.
In this work, coordinate splittings of the 3rd and 4th order were used for the two-dimensional hyperbolic problem of linear elasticity. This made it possible to increase the final convergence order of the computational algorithm. The paper empirically estimates the convergence in the L1 and L∞ norms using analytical solutions of the system with a sufficient degree of smoothness. To obtain objective results, we considered longitudinal and transverse plane waves propagating both along the diagonal of the computational cell and not along it. Numerical experiments demonstrated the improved accuracy and convergence order of the constructed schemes. These improvements come at the cost of a three- or fourfold increase in computational time (for the 3rd and 4th order, respectively) and require no additional memory. The proposed improvement of the computational algorithm preserves the simplicity of its parallel implementation based on the spatial decomposition of the computational grid.
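The composition idea behind such higher-order splittings can be demonstrated on a toy problem (a sketch under simplifying assumptions: two non-commuting linear operators replace the one-dimensional elastic sweeps, and a standard triple-jump composition serves as an example of a 4th-order splitting with a backward-in-time sub-step):

```python
import numpy as np
from scipy.linalg import expm

# Two non-commuting generators standing in for the X- and Y-sweeps of the splitting;
# the exact sub-flows are taken as matrix exponentials.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [-1.0, 0.0]])
step_A = lambda x, dt: expm(A * dt) @ x
step_B = lambda x, dt: expm(B * dt) @ x

def strang(x, dt):
    """Symmetric (Strang) splitting step: 2nd order in time."""
    return step_A(step_B(step_A(x, dt / 2.0), dt), dt / 2.0)

def composed4(x, dt):
    """Triple-jump composition of the 2nd-order step: 4th order in time.
    The middle sub-step has a negative coefficient, i.e. it runs backwards in time."""
    w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    w0 = 1.0 - 2.0 * w1
    for w in (w1, w0, w1):
        x = strang(x, w * dt)
    return x

# Empirical convergence check against the exact solution exp((A + B) T) x0.
x0, T = np.array([1.0, 0.0]), 1.0
exact = expm((A + B) * T) @ x0
for n in (16, 32, 64):
    dt = T / n
    x2, x4 = x0.copy(), x0.copy()
    for _ in range(n):
        x2, x4 = strang(x2, dt), composed4(x4, dt)
    print(n, np.linalg.norm(x2 - exact), np.linalg.norm(x4 - exact))
```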
- OpenCL realization of some many-body potentials
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 549-558
Modeling of carbon nanostructures by means of classical molecular dynamics requires a lot of computation. One way to improve the performance of the basic algorithms is to adapt them to run on SIMD-type computing systems, such as systems with a dedicated GPU. In this work we describe the development of algorithms for computing many-body interactions based on the Tersoff and embedded-atom potentials by means of the OpenCL technology. The OpenCL standard provides universality and portability of the algorithms and can be successfully used for developing software for heterogeneous computing systems. The performance of the algorithms is evaluated on CPU and GPU hardware platforms. It is shown that concurrent memory writes are effective for the Tersoff bond-order potential, whereas the same approach for the embedded-atom potential turns out to be slower than the algorithm without concurrent memory access. The performance evaluation shows a significant GPU acceleration of the energy-force evaluation algorithms for many-body potentials in comparison with the corresponding serial implementations.
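For orientation, a pyopencl skeleton of the one-work-item-per-atom mapping might look as follows (a sketch, not the paper's kernels: a simple pairwise Lennard-Jones energy replaces the Tersoff and embedded-atom potentials, and all sizes and positions are invented):

```python
import numpy as np
import pyopencl as cl

kernel_src = """
__kernel void pair_energy(__global const float4 *pos, __global float *energy, const int n)
{
    int i = get_global_id(0);
    if (i >= n) return;
    float e = 0.0f;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float4 d = pos[i] - pos[j];
        float r2 = d.x*d.x + d.y*d.y + d.z*d.z;
        if (r2 < 1e-12f) continue;
        float inv6 = 1.0f / (r2*r2*r2);
        e += 4.0f * (inv6*inv6 - inv6);   // Lennard-Jones in reduced units
    }
    energy[i] = 0.5f * e;                 // half to avoid double counting
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, kernel_src).build()

n = 256
pos = (np.random.rand(n, 4) * 10.0).astype(np.float32)   # random atom positions
energy = np.zeros(n, dtype=np.float32)

mf = cl.mem_flags
pos_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=pos)
e_buf = cl.Buffer(ctx, mf.WRITE_ONLY, energy.nbytes)

# One work-item per atom, no concurrent writes: each thread owns its own output cell.
prg.pair_energy(queue, (n,), None, pos_buf, e_buf, np.int32(n))
cl.enqueue_copy(queue, energy, e_buf)
print("total potential energy:", energy.sum())
```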