All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Development of and research on machine learning algorithms for solving the classification problem in Twitter publications
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 185-195
Posts on social networks can both predict the movement of the financial market and, in some cases, even determine its direction. The analysis of posts on Twitter contributes to the prediction of cryptocurrency prices. The specifics of the community are reflected in its special vocabulary: posts rely on slang expressions and abbreviations, which complicates the vectorization of text data; to handle this, preprocessing methods such as Stanza lemmatization and regular expressions are considered. This paper describes simple machine learning models that can work despite problems such as a lack of data and a short prediction timeframe. In solving the binary classification problem, each word is treated as an element of a binary feature vector of a data unit. Base words are selected by frequency analysis of word mentions. The labeling is based on Binance candlesticks with variable parameters for a more accurate description of the price trend. The paper introduces metrics that reflect the distribution of words depending on their belonging to the positive or negative class. To solve the classification problem, we used a dense neural network with parameters selected by Keras Tuner, logistic regression, a random forest classifier, a naive Bayes classifier capable of working with a small sample, which is very important for our task, and the k-nearest neighbors method. The constructed models were compared by the accuracy of the predicted labels. During the investigation, we found that the best approach is to use models that predict the price movement of a single coin. Our model deals with posts that mention the LUNA project, which no longer exists. This approach to the binary classification of text data is widely used to predict the price of an asset and the trend of its movement, which is often exploited in automated trading.
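A minimal sketch of the binary word-vector scheme described above, with a hand-rolled Bernoulli naive Bayes classifier standing in for the paper's model zoo. The toy posts, labels, and vocabulary below are invented for illustration; the actual pipeline uses Stanza lemmatization and labels derived from Binance candlesticks.

```python
import math

def vectorize(post, vocab):
    """Binary bag-of-words: 1 if the vocabulary word occurs in the post."""
    words = set(post.lower().split())
    return [1 if w in words else 0 for w in vocab]

def train_bernoulli_nb(X, y):
    """Per-class word probabilities with Laplace smoothing."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        n = len(rows)
        probs = [(sum(col) + 1) / (n + 2) for col in zip(*rows)]
        model[c] = (n / len(y), probs)
    return model

def predict(model, x):
    """Pick the class with the highest joint log-likelihood."""
    best, best_lp = None, float("-inf")
    for c, (prior, probs) in model.items():
        lp = math.log(prior) + sum(
            math.log(p if xi else 1 - p) for xi, p in zip(x, probs))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Toy corpus: 1 = price goes up, 0 = price goes down (labels are invented).
posts = ["luna to the moon buy now", "bullish on luna hodl",
         "luna crash sell everything", "bearish dump incoming sell"]
labels = [1, 1, 0, 0]
vocab = sorted({w for p in posts for w in p.lower().split()})
X = [vectorize(p, vocab) for p in posts]
model = train_bernoulli_nb(X, labels)
print(predict(model, vectorize("buy luna now", vocab)))  # → 1
```

With real data, the same binary vectors feed directly into the other classifiers the paper compares (logistic regression, random forest, k-nearest neighbors).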
-
Development of a hybrid simulation model of the assembly shop
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1359-1379
In the presented work, a hybrid simulation model of an assembly shop has been developed in the AnyLogic environment, which makes it possible to select optimal parameters of production systems. To build the hybrid model, discrete-event and agent-based modeling are combined into a single model with integrated interaction. Within the framework of this work, a mechanism for the development of a production system consisting of several participant agents is described. Each agent corresponds to a class in which a set of agent parameters is specified. The simulation model takes into account three main groups of sequentially performed operations and defines the logic for working with rejected sets. The product assembly process takes place in a multi-phase open queueing system with waiting. There are also signs of a closed system: scrap flows back for reprocessing. When creating the distribution system in this segment, it is mandatory to control the execution of requests in a FIFO queue. For a functional assessment of the production system, the simulation model includes several target functions that describe the number of finished products, the average product preparation time, the number and percentage of rejects, and the simulation result for the study, as well as functional variables that store the calculated utilization factors. A series of modeling experiments was carried out in order to study the behavior of the agents of the system in terms of the overall performance indicators of the production system. During the experiments, it was found that the average product preparation time is strongly influenced by parameters such as the average assembly speed of the sets and the average time to complete operations.
Within the given constraint interval, we managed to select a set of parameters that achieves the highest possible throughput of the assembly line. This experiment implements the basic principle of agent-based modeling: decentralized agents make individual contributions and affect the operation of the entire simulated system as a whole. As a result of the experiments, thanks to the selection from a large set of parameters, it was possible to achieve high performance indicators of the assembly shop, namely, to increase productivity by 60% and to reduce the average assembly time of products by 38%.
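The queueing logic sketched in the abstract can be illustrated with a toy discrete-event model: product sets pass through three sequential operations, and rejected sets re-enter a FIFO queue for reprocessing. All parameter values are invented, and this single-server sketch is far simpler than the AnyLogic model.

```python
import random
from collections import deque

def simulate_shop(n_sets, reject_prob, op_times, seed=1):
    """Toy shop model: each product set passes three sequential operations;
    rejected sets go back into the FIFO queue for reprocessing."""
    rng = random.Random(seed)
    queue = deque(range(n_sets))          # product sets waiting for assembly
    clock, finished, rejects = 0.0, 0, 0
    while queue:
        item = queue.popleft()
        clock += sum(op_times)            # the three operations, in sequence
        if rng.random() < reject_prob:    # quality control after the last op
            rejects += 1
            queue.append(item)            # scrap flow: back for reprocessing
        else:
            finished += 1
    # avg_time is total shop time per finished product, rework included
    return {"finished": finished, "rejects": rejects,
            "avg_time": clock / finished}

stats = simulate_shop(n_sets=100, reject_prob=0.1, op_times=[2.0, 3.0, 1.5])
```

Sweeping `reject_prob` and `op_times` over a grid mimics, in miniature, the parameter-selection experiments described above.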
-
Valuation of machines under a random process of degradation and premature sales
Computer Research and Modeling, 2024, v. 16, no. 3, pp. 797-815
A model of the process of using machinery and equipment is considered, which takes into account the probabilistic nature of operation and sale. It allows for random hidden failures, after which the condition of the machine deteriorates abruptly, as well as for the randomly arising need for a premature (before the end of its service life) sale of the machine, which, generally speaking, takes a random amount of time. The model is aimed at assessing the market value and service life of machines in accordance with International Valuation Standards. Strictly speaking, the market value of a used machine depends on its technical condition, but in practice appraisers take into account only its age, since generally accepted measures of the technical condition of machines do not yet exist. As a result, the market value of a used machine is assumed to be equal to the average market value of similar machines of the corresponding age. For these purposes, appraisers use coefficients that reflect the influence of the age of machines on their market value. Such coefficients are not always justified and take into account neither the degradation of the machine nor the probabilistic nature of its use. The proposed model is based on the anticipation-of-benefits principle. In it, we characterize the state of the machine by the intensity of the benefits it brings. The machine is subject to a compound Poisson failure process; after a failure, its condition abruptly worsens and may even reach its limit state. Situations also arise that preclude further use of the machine by its owner. In such situations, the owner puts the machine up for sale before the end of its service life (prematurely), and the sale takes a random amount of time.
The model allows us to take the influence of such situations into account, to construct an analytical relationship linking the market value of a machine with its condition, and to calculate the average coefficients of change in the market value of machines with age. It also makes it possible to account for inflation and the scrap value of the machine. We have found that the rate of premature sales has a significant impact on the service life and market value of new and used machines. At the same time, the dependence of the market value of machines on age is largely determined by the coefficient of variation of the machines' service life. The results obtained allow us to derive more reasonable estimates of the market value of machines, including for the purposes of the system of national accounts.
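The anticipation-of-benefits idea can be sketched by Monte Carlo simulation: benefits accrue continuously and are discounted, while Poisson-timed hidden failures abruptly worsen the machine's state until it reaches the limit. Here a fixed jump stands in for the paper's compound Poisson jump sizes, premature sales are omitted, and all parameter values are illustrative.

```python
import math, random

def machine_value(benefit0, fail_rate, jump, limit, discount, horizon,
                  n_paths=2000, seed=7):
    """Expected discounted benefits of a machine whose benefit intensity
    drops by `jump` at Poisson(fail_rate) failure times, until the benefit
    falls to `limit` (end of service life) or `horizon` is reached."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        benefit, t, value = benefit0, 0.0, 0.0
        while t < horizon and benefit > limit:
            # time to the next hidden failure, capped at the horizon
            step = min(rng.expovariate(fail_rate), horizon - t)
            # benefits accrue continuously, discounted at rate `discount`
            value += (math.exp(-discount * t) * benefit
                      * (1 - math.exp(-discount * step)) / discount)
            t += step
            benefit -= jump              # abrupt worsening after a failure
        total += value
    return total / n_paths

value_new = machine_value(1.0, 0.5, 0.2, 0.0, 0.1, 10.0)
```

Running the same simulation from different starting states (ages) yields the age-dependent value coefficients discussed above, here by simulation rather than the paper's analytical relationship.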
-
Changepoint detection in biometric data: retrospective nonparametric segmentation methods based on dynamic programming and sliding windows
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1295-1321
This paper is dedicated to the analysis of medical and biological data obtained through locomotor training and testing of astronauts conducted both on Earth and during spaceflight. These experiments can be described as the astronaut’s movement on a treadmill according to a predefined regimen in various speed modes. During these modes, not only the speed but also a range of parameters, including heart rate, ground reaction force, and others, are recorded. In order to analyze the dynamics of the astronaut’s condition over an extended period, it is necessary to perform a qualitative segmentation of their movement modes so as to assess the target metrics independently. This task becomes particularly relevant in the development of an autonomous life support system for astronauts that operates without direct supervision from Earth. The segmentation of target data is complicated by the presence of various anomalies, such as deviations from the predefined regimen, transitions between modes of arbitrary and varying duration, hardware failures, and other factors. The paper includes a detailed review of several contemporary retrospective (offline) nonparametric methods for detecting multiple changepoints, that is, sudden changes in the properties of the observed time series occurring at unknown moments. Special attention is given to algorithms and statistical measures that determine the homogeneity of the data and to methods for detecting change points. The paper considers approaches based on dynamic programming and sliding-window methods. The second part of the paper focuses on the numerical evaluation of these methods using characteristic examples of experimental data, including both “simple” and “complex” speed profiles of movement. The analysis conducted allowed us to identify the preferred methods, which will be further evaluated on the complete dataset.
Preference is given to methods that keep the segmentation close to a reference one, can potentially detect both boundaries of transient processes, and are robust with respect to their internal parameters.
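The sliding-window approach mentioned above can be sketched in a few lines: each index is scored by the discrepancy between the two half-windows around it (here simply the gap between their means), and changepoints are reported as local maxima of the score above a threshold. The piecewise-constant "speed profile" below is invented for illustration.

```python
def sliding_window_changepoints(signal, width, threshold):
    """Score each index by the gap between the means of the two half-windows
    around it; report local maxima of the score above the threshold."""
    scores = [0.0] * len(signal)
    for i in range(width, len(signal) - width):
        left = signal[i - width:i]
        right = signal[i:i + width]
        scores[i] = abs(sum(left) / width - sum(right) / width)
    peaks = [i for i in range(1, len(signal) - 1)
             if scores[i] >= threshold
             and scores[i] >= scores[i - 1] and scores[i] > scores[i + 1]]
    return peaks

# Toy piecewise-constant "speed profile": walk, run, walk (values invented).
profile = [4.0] * 30 + [8.0] * 30 + [4.0] * 30
print(sliding_window_changepoints(profile, width=10, threshold=2.0))
```

The mean gap is only one possible homogeneity measure; the nonparametric statistics the paper reviews plug into the same scheme in place of the two means.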
-
A survey on the application of large language models in software engineering
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1715-1726
Large Language Models (LLMs) are transforming software engineering by bridging the gap between natural language and programming languages. These models have revolutionized communication within development teams and the Software Development Life Cycle (SDLC) by enabling developers to interact with code using natural language, thereby improving workflow efficiency. This survey examines the impact of LLMs across various stages of the SDLC, including requirement gathering, system design, coding, debugging, testing, and documentation. LLMs have proven to be particularly useful in automating repetitive tasks such as code generation, refactoring, and bug detection, thus reducing manual effort and accelerating the development process. The integration of LLMs into the development process offers several advantages, including the automation of error correction, enhanced collaboration, and the ability to generate high-quality, functional code based on natural language input. Additionally, LLMs assist developers in understanding and implementing complex software requirements and design patterns. This paper also discusses the evolution of LLMs from simple code completion tools to sophisticated models capable of performing high-level software engineering tasks. However, despite their benefits, there are challenges associated with LLM adoption, such as issues related to model accuracy, interpretability, and potential biases. These limitations must be addressed to ensure the reliable deployment of LLMs in production environments. The paper concludes by identifying key areas for future research, including improving the adaptability of LLMs to specific software domains, enhancing their contextual understanding, and refining their capabilities to generate semantically accurate and efficient code.
This survey provides valuable insights into the evolving role of LLMs in software engineering, offering a foundation for further exploration and practical implementation.
-
Platelet transport and adhesion in shear blood flow: the role of erythrocytes
Computer Research and Modeling, 2012, v. 4, no. 1, pp. 185-200
The hemostatic system serves the organism for urgent repair of damaged blood vessel walls. Its main components, platelets, the smallest blood cells, are constantly present in blood and quickly adhere to the site of injury. Platelet migration across the blood flow and their collisions with the wall are governed by blood flow conditions and, in particular, by the physical presence of other blood cells: erythrocytes. In this review we consider the main regularities of this influence and the available mathematical models of platelet migration across the blood flow and adhesion based on "convection-diffusion" PDEs, and discuss recent advances in this field. Understanding the mechanisms of these processes is necessary for building adequate mathematical models of hemostatic system functioning in blood flow under normal and pathological conditions.
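Only the diffusion half of such convection-diffusion models is sketched below: a conservative finite-difference step for platelet concentration with zero-flux walls, where the diffusivity is meant to stand in for erythrocyte-enhanced platelet dispersion (the numerical value is purely illustrative, not taken from the review).

```python
def diffusion_step(c, D, dx, dt):
    """One explicit step of dc/dt = d/dx(D dc/dx) in conservative (flux)
    form with zero-flux walls, so the total platelet count is conserved."""
    flux = [0.0] * (len(c) + 1)                # flux[i] across face i-1/2
    for i in range(1, len(c)):
        flux[i] = -D * (c[i] - c[i - 1]) / dx  # Fick's law; wall fluxes stay 0
    return [c[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(len(c))]

# Initial platelet concentration spike mid-channel; D mimics the strong
# erythrocyte-induced enhancement over Brownian diffusion (value invented).
c = [0.0] * 20 + [1.0] + [0.0] * 20
for _ in range(200):                           # D*dt/dx^2 = 0.1, stable
    c = diffusion_step(c, D=1.0e-7, dx=1.0e-3, dt=1.0)
```

A full model of the kind reviewed would add a convective (drift) term toward the wall and an adhesion boundary condition instead of the zero-flux wall.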
-
On using difference schemes for the transport equation with drain in grid modeling
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1149-1164
Modern power transportation systems are complex engineering systems. Such systems include both point facilities (power producers, consumers, transformer substations, etc.) and distributed elements (e.g., power lines). When creating mathematical models, such structures are represented as graphs with different types of nodes. To study the dynamic effects in such systems, it is necessary to solve a system of partial differential equations of hyperbolic type.
The work uses an approach similar to one already applied earlier in modeling similar problems. A new variant of the splitting method, proposed by the authors, was used. Unlike most known works, the splitting is not carried out according to physical processes (energy transport without dissipation and, separately, dissipative processes). Instead, we split the system into transport equations with drain and the exchange between Riemann invariants. This splitting makes it possible to construct hybrid schemes for the Riemann invariants with a high order of approximation and minimal dissipation error. An example of constructing such a hybrid difference scheme is described for a single-phase power line. The proposed difference scheme is based on the analysis of the properties of schemes in the space of undetermined coefficients.
Examples of numerical solutions of a model problem using the proposed splitting and difference scheme are given. The results of the numerical calculations show that the difference scheme allows reproducing the arising regions of large gradients. It is shown that the difference schemes also allow detecting resonances in such systems.
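The model equation for a single Riemann invariant with a drain term, w_t + a·w_x = −q·w, can be illustrated with a first-order upwind step (not the paper's hybrid scheme, which is higher-order; all parameter values are invented).

```python
import math

def upwind_step(w, a, q, dx, dt):
    """First-order upwind step for w_t + a*w_x = -q*w with a > 0:
    the spatial difference is taken against the flow direction."""
    return [w[0]] + [
        w[i] - a * dt / dx * (w[i] - w[i - 1]) - q * dt * w[i]
        for i in range(1, len(w))
    ]

# Advect a smooth pulse along a line with a weak drain; CFL = a*dt/dx = 0.5.
nx, dx, dt, a, q = 100, 1.0, 0.5, 1.0, 0.01
w = [math.exp(-(((i - 20) * dx) / 5.0) ** 2) for i in range(nx)]
for _ in range(40):           # the pulse travels a*dt*40 = 20 cells
    w = upwind_step(w, a, q, dx, dt)
```

Under the CFL condition the scheme is monotone, which is exactly the property that lets it handle the regions of large gradients mentioned above, at the price of the numerical dissipation that the paper's hybrid construction is designed to minimize.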
-
Modernization as a global process: the experience of mathematical modeling
Computer Research and Modeling, 2021, v. 13, no. 4, pp. 859-873
The article analyzes empirical data on the long-term demographic and economic dynamics of the countries of the world for the period from the beginning of the 19th century to the present. The population and GDP of a number of countries for the period 1500–2016 were selected as indicators characterizing long-term demographic and economic dynamics. Countries were chosen so as to include representatives with different levels of development (developed and developing countries), as well as countries from different regions of the world (North America, South America, Europe, Asia, Africa). A specially developed mathematical model was used for modeling and data processing. The presented model is an autonomous system of differential equations that describes the processes of socio-economic modernization, including the transition from an agrarian society to an industrial and post-industrial one. The model embodies the idea that the process of modernization begins with the emergence, in a traditional society, of an innovative sector developing on the basis of new technologies. The population gradually moves from the traditional sector to the innovation sector. Modernization is complete when most of the population has moved to the innovation sector.
Statistical methods of data processing and Big Data methods, including hierarchical clustering, were used. Using a developed algorithm based on the random descent method, the parameters of the model were identified and verified against empirical series, and the model was tested on statistical data reflecting the changes observed in developed and developing countries during the modernization of the past centuries. Testing demonstrated the model's high quality: the deviations of the calculated curves from the statistical data are usually small and occur during periods of wars and economic crises. Thus, the analysis of statistical data on the long-term demographic and economic dynamics of the countries of the world made it possible to identify general patterns and formalize them in the form of a mathematical model. The model will be used to forecast demographic and economic dynamics in different countries of the world.
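The sector-transfer idea can be reduced to a one-equation sketch: if x is the share of the population in the innovation sector and transfer is driven by contacts between the two sectors, the simplest dynamics is logistic. The paper's model is a full system of differential equations; the equation and parameter values below are only an illustrative stand-in.

```python
def simulate_modernization(x0, r, steps, dt):
    """Euler integration of dx/dt = r*x*(1 - x): x is the innovation-sector
    share, and the transfer rate is proportional to inter-sector contacts."""
    x, path = x0, [x0]
    for _ in range(steps):
        x += dt * r * x * (1 - x)
        path.append(x)
    return path

# Start with 2% of the population in the innovation sector (value invented).
path = simulate_modernization(x0=0.02, r=0.5, steps=200, dt=0.1)
```

The trajectory reproduces the qualitative S-shape of modernization: a slow start, rapid transition, and saturation once most of the population has moved to the innovation sector.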
-
Fuzzy modeling of the mechanism of transmitting the panic state among people with various temperament types
Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1079-1092
A mass gathering of people always represents a potential danger and a threat to their lives. In addition, every year a very large number of people around the world die in crushes, the main cause of which is mass panic. Therefore, the study of the phenomenon of mass panic, in view of its extreme social danger, is an important scientific task. The available information about the processes of its occurrence and spread is inherently imprecise. Therefore, the theory of fuzzy sets was chosen as the tool for developing a mathematical model of the mechanism of transmitting the panic state among people with various temperament types.
When developing the fuzzy model, it was assumed that panic spreads from the epicenter of the shocking stimulus among people according to the wave principle, passing at different frequencies through different media (types of human temperament), and is determined by the speed and intensity of the circular reaction of the mechanism of transmitting the panic state among people. Accordingly, the developed fuzzy model, along with two inputs, has two outputs: the speed and the intensity of the circular reaction. In the «Fuzzification» block, the membership degrees of the numerical values of the input parameters in fuzzy sets are calculated. The «Inference» block receives the membership degrees for each input parameter and outputs the resulting membership function of the speed of the circular reaction and its derivative, which is the membership function of the intensity of the circular reaction. In the «Defuzzification» block, a quantitative value is determined for each output parameter using the center-of-gravity method. A quality assessment of the developed fuzzy model, carried out by calculating the coefficient of determination, showed that the model belongs to the category of good-quality models.
The result obtained, in the form of quantitative assessments of the circular reaction, improves the understanding of the mental processes occurring during the transmission of the panic state among people. In addition, it makes it possible to improve existing models of chaotic human behavior and to develop new ones, which are needed to produce effective decisions in crisis situations aimed at the full or partial prevention of the spread of mass panic that leads to panic flight and human casualties.
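The fuzzification–inference–defuzzification chain described above can be sketched with a single-input, single-output Mamdani-style example (the paper's model has two inputs and two outputs; the membership functions, rules, and universes below are all invented for illustration).

```python
def triangle(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer_speed(stimulus):
    """Fuzzify the stimulus strength, fire two rules, and defuzzify the
    aggregated output set by the center-of-gravity method."""
    # Fuzzification: membership degrees of the input in "weak" and "strong"
    weak = triangle(stimulus, -0.5, 0.0, 0.5)
    strong = triangle(stimulus, 0.5, 1.0, 1.5)
    # Inference: rule weights clip the output sets "slow" and "fast"
    grid = [i / 100 for i in range(101)]          # speed universe [0, 1]
    agg = [max(min(weak, triangle(v, -0.5, 0.0, 0.5)),
               min(strong, triangle(v, 0.5, 1.0, 1.5))) for v in grid]
    # Defuzzification: center of gravity of the aggregated set
    num = sum(v * m for v, m in zip(grid, agg))
    den = sum(agg)
    return num / den if den else 0.0
```

A strong stimulus yields a high crisp speed of the circular reaction and a weak one a low speed, mirroring the qualitative behavior the model is built to quantify.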
-
Efficient and error-free information hiding in the hybrid domain of digital images using metaheuristic optimization
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 197-210
Data hiding in digital images is a promising direction in cybersecurity. Digital steganography methods provide imperceptible transmission of secret data over an open communication channel. The efficiency of information embedding depends on the imperceptibility, capacity, and robustness of the embedding. These quality criteria are mutually inverse, and improving one indicator usually leads to the deterioration of the others. A balance between them can be achieved using metaheuristic optimization. Metaheuristics are a class of optimization algorithms that find an optimal, or close to optimal, solution for a variety of problems, including those that are difficult to formalize, by simulating various natural processes, for example, the evolution of species or the behavior of animals. In this study, we propose an approach to data hiding in the hybrid spatial-frequency domain of digital images based on metaheuristic optimization. Changing a block of image pixels according to some change matrix is considered as the embedding operation. We select the change matrix adaptively for each block using metaheuristic optimization algorithms. In this study, we compare the performance of three metaheuristics, namely the genetic algorithm, particle swarm optimization, and differential evolution, in finding the best change matrix. Experimental results showed that the proposed approach provides high imperceptibility of embedding, high capacity, and error-free extraction of the embedded information. At the same time, the change matrices do not need to be stored for subsequent data extraction. This improves the user experience and reduces the chance of an attacker discovering the steganographic attachment.
Relative to the previous algorithm for embedding information into discrete cosine transform coefficients using the QIM method [Evsutin, Melman, Meshcheryakov, 2021], the metaheuristics increased the imperceptibility indicator, estimated by the PSNR metric, and the capacity by 26.02% and 30.18%, respectively, for the genetic algorithm; by 26.01% and 19.39% for particle swarm optimization; and by 27.30% and 28.73% for differential evolution.
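The change-matrix search can be sketched with a toy genetic algorithm: for one 8-pixel block, find a matrix of {−1, 0, +1} changes such that the secret bits extract correctly (here, a simple parity rule stands in for the paper's hybrid-domain extraction) while the distortion is minimal. The block, bits, extraction rule, and GA settings are all invented for illustration.

```python
import random

def fitness(block, change, bits):
    """Correct extraction dominates; among feasible matrices, prefer low
    distortion. Toy extraction rule: the bit is the modified pixel's parity."""
    stego = [p + d for p, d in zip(block, change)]
    errors = sum(s % 2 != b for s, b in zip(stego, bits))
    return -(1000 * errors + sum(d * d for d in change))

def ga_change_matrix(block, bits, pop=30, gens=60, seed=3):
    """Elitist GA over change matrices with one-point crossover and
    point mutation; the no-change matrix seeds the initial population."""
    rng = random.Random(seed)
    genes = [-1, 0, 1]
    population = [[0] * len(block)] + [
        [rng.choice(genes) for _ in block] for _ in range(pop - 1)]
    for _ in range(gens):
        population.sort(key=lambda ch: fitness(block, ch, bits), reverse=True)
        survivors = population[:pop // 2]        # elitism: best half kept
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(len(block))
            child = a[:cut] + b[cut:]            # one-point crossover
            child[rng.randrange(len(block))] = rng.choice(genes)  # mutation
            children.append(child)
        population = survivors + children
    return max(population, key=lambda ch: fitness(block, ch, bits))

block = [52, 55, 61, 66, 70, 61, 64, 73]     # toy 8-pixel block
bits = [1, 0, 1, 1, 0, 0, 1, 0]              # secret bits to embed
best = ga_change_matrix(block, bits)
```

In the paper's setting the fitness would instead score PSNR and extraction correctness in the hybrid spatial-frequency domain, and the particle swarm and differential evolution variants would search the same space of change matrices.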
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
International Interdisciplinary Conference "Mathematics. Computing. Education"




