-
Modeling of the dynamics of the population employed in economic sectors: an agent-oriented approach
Computer Research and Modeling, 2018, v. 10, no. 6, pp. 919-937
The article deals with modeling the number of people employed in the branches of the economy at the national and regional levels. The absence of targeted distribution of workers in a market economy requires the study of systemic processes in the labor market that lead to different dynamics of the number of employed in the sectors of the economy. In this case, the personal strategies by which economic agents choose their labor activity become important. The presence of different strategies leads to the emergence of strata in the labor market with a dynamically changing number of employees, unevenly distributed among the sectors of the economy. As a result, nonlinear fluctuations in the number of employed can be observed, and the toolkit of agent-based modeling is relevant for studying these fluctuations. In the article, we examine in-phase and anti-phase fluctuations in the number of employees by economic activity, using the example of the Jewish Autonomous Region in Russia. The fluctuations were found in time series of statistical data for 2008–2016. We show that such fluctuations appear within age groups of workers. In view of this, we put forward the hypothesis that an agent in the labor market chooses a place of work by a strategy related to his age group. This directly affects the distribution of the number of employed across cohorts and the total number of employed in the sectors of the economy. The agent determines the strategy taking into account the socio-economic characteristics of the branches of the economy (different levels of wages, working conditions, prestige of the profession). To test the hypothesis, we construct a basic agent-oriented model of a three-branch economy. The model takes into account various strategies of economic agents, including the choice of the highest wages, the highest prestige of the profession, or the best working conditions. As a result of numerical experiments, we show that the availability of various industry selection strategies and the age preferences of employers within an industry lead to periodic and complex dynamics of the number of employees of different ages. Age preferences may be a consequence, for example, of an employer's requirements for work experience and education. Significant changes in the age structure of the employed population may also result from migration.
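The age-dependent choice mechanism described above lends itself to a compact simulation sketch. The Python fragment below is a hypothetical simplification, not the paper's model: the branch attributes, cohort weights, and the weighted-utility rule are all assumed for illustration.

```python
import random

# Illustrative branch attributes: (wage, prestige, working conditions).
BRANCHES = {
    "A": (1.0, 0.6, 0.4),
    "B": (0.7, 0.9, 0.5),
    "C": (0.5, 0.5, 0.9),
}

# Assumed strategies: each age cohort weighs the attributes differently.
STRATEGY_BY_COHORT = {
    "young":  (0.2, 0.6, 0.2),   # prestige-driven
    "middle": (0.7, 0.1, 0.2),   # wage-driven
    "senior": (0.2, 0.1, 0.7),   # conditions-driven
}

def choose_branch(cohort: str) -> str:
    """Agent picks the branch with the highest weighted utility."""
    w_wage, w_prest, w_cond = STRATEGY_BY_COHORT[cohort]
    def utility(attrs):
        wage, prestige, conditions = attrs
        return w_wage * wage + w_prest * prestige + w_cond * conditions
    return max(BRANCHES, key=lambda b: utility(BRANCHES[b]))

# One synthetic population step: count employment by branch.
agents = [random.choice(list(STRATEGY_BY_COHORT)) for _ in range(1000)]
counts = {b: 0 for b in BRANCHES}
for cohort in agents:
    counts[choose_branch(cohort)] += 1
print(counts)
```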
-
Nonextensive Tsallis statistics of the contract system of prime contractors and subcontractors in the defense industry
Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1163-1183
In this work, we analyze the system of contracts made by Russian defense enterprises in the process of state defense order execution. We conclude that methods of statistical mechanics can be applied to the description of this system. Following the original grand-canonical-ensemble approach, we construct the statistical ensemble under investigation as a set of instant snapshots of indistinguishable contracts having individual values. We show that, due to government regulation of contract prices, the contract system can be described in terms of nonextensive Tsallis statistics. We have found that the probability distributions of contract prices correspond to deformed Bose–Einstein distributions obtained using nonextensive Tsallis entropy. This conclusion holds both for the whole set of contracts and for the contracts made by an individual defense company as a seller.
In order to analyze how well deformed Bose–Einstein distributions fit the empirical contract price distributions, we compare the corresponding cumulative distribution functions. We conclude that the annual distributions of individual sales corresponding to each company's contracts (orders) can be used as relevant data for contract price distribution analysis. The empirical cumulative distribution functions for the individual sales ranking of Concern CSRI Elektropribor, one of the leading Russian defense companies, are analyzed for the period 2007–2021. The theoretical cumulative distribution functions, obtained using deformed Bose–Einstein distributions in the "rare contract gas" limit, fit the empirical cumulative distribution functions well. The fitted values of the entropic index show that the degree of nonextensivity of the system under investigation is rather high. It is shown that the characteristic prices of the distributions can be estimated by weighting the values of annual individual sales with the escort probabilities. Given that the fitted values of the chemical potential are equal to zero, we suggest that the "gas of contracts" can be compared to a photon gas, in which the number of particles is not conserved.
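For reference, a commonly used form of the deformed Bose–Einstein occupation number and of the escort probabilities from the nonextensive-statistics literature is given below; the paper's exact parametrization may differ.

```latex
% Deformed Bose--Einstein occupation number from Tsallis statistics
% (a common form in the literature; the paper's parametrization may differ):
\[
  \langle n(\varepsilon) \rangle_q
  = \frac{1}{\bigl[\,1 + (q-1)\,\beta\,(\varepsilon - \mu)\,\bigr]^{1/(q-1)} - 1},
\]
% which reduces to the ordinary Bose--Einstein distribution as $q \to 1$;
% the abstract's fitted chemical potential corresponds to $\mu = 0$.
% Escort probabilities used to weight annual individual sales $x_i$:
\[
  P_i = \frac{p_i^{\,q}}{\sum_j p_j^{\,q}},
  \qquad
  \langle x \rangle_q = \sum_i P_i\, x_i .
\]
```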
-
Development of and research on an algorithm for distinguishing features in Twitter publications for a classification problem with known markup
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 171-183
Social media posts play an important role in reflecting the state of the financial market, and their analysis is a powerful tool for trading. The article describes the results of a study of the impact of social media activity on the movement of the financial market. The top authoritative influencers are selected. Twitter posts are used as data. Such texts usually include slang and abbreviations, so methods for preparing the primary text data, including Stanza and regular expressions, are presented. Two approaches to representing a point in time in the format of text data are considered, and the difference between the influence of a single tweet and of a whole package of tweets collected over a certain period of time is investigated. A statistical approach in the form of frequency analysis is also considered, and metrics are introduced that capture the significance of a particular word in identifying the relationship between price changes and Twitter posts. Frequency analysis involves studying the distributions of occurrences of various words and bigrams in the text for positive, negative, or general trends. To build the markup, changes in the market are processed into a binary vector using various parameters, thus setting up a binary classification task. The parameters of Binance candlesticks are tuned for a better description of the movement of the cryptocurrency market, and their variability is also explored in this article. Sentiment is studied using Stanford CoreNLP. The results of the statistical analysis are relevant to feature selection for further binary or multiclass classification tasks. The presented methods of text analysis help increase the accuracy of models designed to solve natural language processing problems by selecting words and improving the quality of vectorization. Such algorithms are often used in automated trading strategies to predict the price of an asset and the trend of its movement.
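As a rough illustration of the frequency analysis described above, the sketch below ranks words by the gap between their relative document frequencies in tweets labeled "up" versus "down". The tokenizer, the toy data, and the score itself are assumptions for illustration; the paper uses Stanza lemmatization and its own metrics.

```python
from collections import Counter
import re

def tokenize(text: str) -> list[str]:
    # Crude normalization for illustration; the paper uses Stanza instead.
    return re.findall(r"[a-z$#]+", text.lower())

def word_scores(tweets: list[str], labels: list[int]) -> dict[str, float]:
    """Score > 0 means the word leans toward upward price moves."""
    pos, neg = Counter(), Counter()
    n_pos = sum(labels) or 1
    n_neg = (len(labels) - sum(labels)) or 1
    for text, y in zip(tweets, labels):
        # Count each word once per tweet (document frequency).
        (pos if y == 1 else neg).update(set(tokenize(text)))
    vocab = set(pos) | set(neg)
    return {w: pos[w] / n_pos - neg[w] / n_neg for w in vocab}

# Toy placeholder data, not the paper's dataset.
tweets = ["btc to the moon", "selling everything, market crash", "buy btc now"]
labels = [1, 0, 1]  # 1 = price went up over the markup horizon
print(sorted(word_scores(tweets, labels).items(), key=lambda kv: -kv[1])[:5])
```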
-
Storage is an essential and expensive part of cloud computing, both from the point of view of network requirements and of data access organization, so the choice of storage architecture can be crucial for any application. In this article we look at types of cloud architectures for data processing and data storage based on proven enterprise storage technology. The advantage of cloud computing is the ability to virtualize and share resources among different applications for better server utilization. We discuss and evaluate distributed data processing, database architectures for cloud computing, and database queries both in the local network and under real-time conditions.
-
Development of and research on machine learning algorithms for solving the classification problem in Twitter publications
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 185-195
Posts on social networks can both predict the movement of the financial market and, in some cases, even determine its direction. Analysis of posts on Twitter contributes to the prediction of cryptocurrency prices. The specificity of the community is reflected in a special vocabulary: slang expressions and abbreviations are used in posts, and their presence makes it difficult to vectorize text data; as a result, preprocessing methods such as Stanza lemmatization and the use of regular expressions are considered. This paper describes the simplest machine learning models we created, which may work despite such problems as lack of data and a short prediction timeframe. In solving the binary classification problem, a word is treated as an element of a binary vector representing a data unit. Basic words are determined by frequency analysis of word mentions. The markup is based on Binance candlesticks with variable parameters for a more accurate description of the trend of price changes. The paper introduces metrics that reflect the distribution of words depending on whether they belong to the positive or the negative class. To solve the classification problem, we used a dense model with parameters selected by Keras Tuner, logistic regression, a random forest classifier, a naive Bayesian classifier capable of working with a small sample, which is very important for our task, and the k-nearest neighbors method. The constructed models were compared based on the accuracy of the predicted labels. During the investigation we found that the best approach is to use models that predict the price movements of a single coin. Our model deals with posts that mention the LUNA project, which no longer exists. This approach to the binary classification of text data is widely used to predict the price of an asset and the trend of its movement, which is often exploited in automated trading.
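A minimal sketch of the model comparison described above, using binary word vectors and four of the classifiers mentioned (the Keras Tuner dense model is omitted). The toy tweets and labels are placeholders, not the LUNA dataset from the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy placeholder data: 1 = price up over the markup horizon.
tweets = ["luna pumping hard", "luna dump incoming", "buy luna", "sell now",
          "luna to the moon", "market crash, exit luna"]
labels = [1, 0, 1, 0, 1, 0]

# Binary bag-of-words: each word becomes a 0/1 feature, as in the paper.
X = CountVectorizer(binary=True).fit_transform(tweets)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.33,
                                          random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "naive_bayes": BernoulliNB(),          # handles small samples well
    "knn": KNeighborsClassifier(n_neighbors=3),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```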
-
Models of soil organic matter dynamics: problems and perspectives
Computer Research and Modeling, 2016, v. 8, no. 2, pp. 391-399
Soil, as a complex multifunctional open system, is one of the most difficult objects to model. In spite of serious achievements in soil system modeling, existing models do not reflect all aspects and processes of soil organic matter mineralization and humification. The problems and "hot spots" in modeling the dynamics of soil organic matter and biophilous elements were identified on the basis of the creation and wide application of the ROMUL and EFIMOD models. The following aspects are discussed: further theoretical background; improving the structure of the models; preparation and uncertainty of the initial data; inclusion of all soil biota (microorganisms, micro- and meso-fauna) as factors of humification; the impact of soil mineralogy on C and N dynamics; the hydro-thermal regime and organic matter distribution over the whole soil profile; and the vertical and horizontal migration of soil organic matter. Effective feedback from modellers to experimentalists is necessary to solve the listed problems.
Keywords: mathematical model, soil organic matter.
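For orientation, a minimal two-pool mineralization–humification scheme of the kind that underlies models such as ROMUL is sketched below; it is illustrative only, not the actual ROMUL equations.

```latex
% Litter $L$ decays at rate $k_L$; a fraction $h$ of the loss is humified
% into soil organic matter $H$, which mineralizes at rate $k_H$:
\[
  \frac{dL}{dt} = I - k_L L,
  \qquad
  \frac{dH}{dt} = h\,k_L L - k_H H,
\]
% where $I$ is litter input. The rate constants $k_L$, $k_H$ are typically
% functions of soil temperature and moisture, one of the "hot spots"
% discussed above.
```
-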
Performance prediction for chosen types of loops over one-dimensional arrays with embedding-driven intermediate representations analysis
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 211-224
A method for mapping a set of intermediate representations (IR) of C and C++ programs into a vector embedding space is considered, in order to create an empirical estimation framework for static performance prediction using the LLVM compiler infrastructure. The use of embeddings makes programs easier to compare, since it avoids direct comparison of Control Flow Graphs (CFG) and Data Flow Graphs (DFG). The method is based on a series of transformations of the initial IR: instrumentation, i.e. injection of artificial instructions in an instrumentation compiler pass depending on the load offset delta between the current instruction and the previous one; mapping of the instrumented IR into a multidimensional vector with IR2Vec; and dimension reduction with the t-SNE (t-distributed stochastic neighbor embedding) method. The D1 cache miss ratio measured with the perf stat tool is used as the performance metric. A heuristic criterion for deciding which programs have a higher or lower cache miss ratio is given; the criterion is based on the embeddings of programs in 2D space. The instrumentation compiler pass developed in this work is described, including how it generates and injects artificial instructions into the IR within the memory model used. The software pipeline that implements the performance estimation on top of the LLVM compiler infrastructure is given. Computational experiments are performed on synthetic tests, which are sets of programs with the same CFGs but with different sequences of offsets used when accessing a one-dimensional array of a given size. The correlation coefficient between the performance metric and the distance to the worst program's embedding is measured and shown to be negative regardless of the t-SNE initialization, which confirms the heuristic criterion. The process of generating such synthetic tests is also considered. Moreover, the spread of the performance metric across the programs of such a test is proposed as a metric to be improved by exploring further test generators.
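The evaluation step can be sketched as follows. The embeddings and miss ratios below are random placeholders (the real pipeline obtains embeddings from IR2Vec over instrumented LLVM IR and miss ratios from perf stat), so the printed correlation is not meaningful; on real data the paper reports it to be negative.

```python
import numpy as np
from sklearn.manifold import TSNE
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(40, 300))      # placeholder IR2Vec-style vectors
miss_ratio = rng.uniform(0.0, 0.3, size=40)  # placeholder D1 miss ratios

# Reduce to 2D; t-SNE depends on initialization, which is why the paper
# checks the sign of the correlation across initializations.
xy = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(embeddings)

worst = xy[np.argmax(miss_ratio)]            # embedding of the worst program
dist = np.linalg.norm(xy - worst, axis=1)    # distance to the worst embedding

r, p = pearsonr(dist, miss_ratio)            # expected negative on real data
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```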
-
Efficient processing and classification of wave energy spectrum data with a distributed pipeline
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 517-520
Processing of large amounts of data often consists of several steps, e.g. pre- and post-processing stages, which are executed sequentially with data written to disk after each step. However, when the pre-processing stage differs for each task, a more efficient way of processing data is to construct a pipeline that streams data from one stage to another. In a more general case, some processing stages can be factored into several parallel subordinate stages, thus forming a distributed pipeline in which each stage can have multiple inputs and multiple outputs. Such a processing pattern emerges in the problem of classification of wave energy spectra based on analytic approximations, which can extract different wave systems and their parameters (e.g. wave system type, mean wave direction) from a spectrum. The distributed pipeline approach achieves good performance compared to conventional "sequential-stage" processing.
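A toy sketch of this pipeline pattern: stages stream items through queues instead of writing to disk between steps, and one stage is factored into parallel workers (fan-out/fan-in). The stage names and the trivial processing lambdas are placeholders, not the paper's code.

```python
import threading, queue

SENTINEL = None

def stage(fn, inbox, outbox, n_sentinels=1):
    """Consume items until a sentinel arrives, then signal downstream."""
    while (item := inbox.get()) is not SENTINEL:
        outbox.put(fn(item))
    for _ in range(n_sentinels):
        outbox.put(SENTINEL)

q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
n_workers = 2

# pre-processing -> two parallel "classification" workers -> collector
threading.Thread(target=stage,
                 args=(lambda x: x * 2, q_in, q_mid, n_workers)).start()
for _ in range(n_workers):
    threading.Thread(target=stage, args=(lambda x: x + 1, q_mid, q_out)).start()

for x in range(5):
    q_in.put(x)
q_in.put(SENTINEL)

# Fan-in: wait for one sentinel per parallel worker before stopping.
results, done = [], 0
while done < n_workers:
    item = q_out.get()
    if item is SENTINEL:
        done += 1
    else:
        results.append(item)
print(sorted(results))  # [1, 3, 5, 7, 9]
```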
-
An interactive tool for developing distributed telemedicine systems
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 521-527
Getting a qualified medical examination can be difficult for people in remote areas, because the available medical staff may be inaccessible or may lack expert knowledge at the proper level. Telemedicine technologies can help in such situations. On the one hand, they allow highly qualified doctors to consult remotely, thereby increasing the quality of diagnosis and treatment planning. On the other hand, computer-aided analysis of examination results, anamnesis, and information on similar cases assists medical staff in their routine activities and decision-making.
Creating a telemedicine system for a particular domain is a laborious process. It is not sufficient to select the proper medical experts and to fill the knowledge base of the analytical module; it is also necessary to organize the entire infrastructure of the system to meet requirements for reliability, fault tolerance, protection of personal data, and so on. Tools with reusable infrastructure elements, which are common to such systems, can decrease the amount of work needed to develop telemedicine systems.
An interactive tool for creating distributed telemedicine systems is described in the article. A list of requirements for such systems is presented, structural solutions for meeting these requirements are suggested, and a composition of such elements applicable to distributed systems is described. A cardiac telemedicine system is described as the foundation of the tool.
-
The Tier-1 resource center at the National Research Centre "Kurchatov Institute" for the ALICE, ATLAS and LHCb experiments at the Large Hadron Collider (LHC)
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 621-630
A review of the distributed computing infrastructure of the Tier-1 sites for the ALICE, ATLAS and LHCb experiments at the LHC is given. Special emphasis is placed on the main tasks and services of the Tier-1 site operated by the Kurchatov Institute in Moscow.