Development of and research into a rigid algorithm for analyzing Twitter publications and its influence on the movements of the cryptocurrency market
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 157-170
Social media is a crucial indicator of the position of assets in the financial market. The paper describes a rigid solution to the classification problem of determining the influence of social media activity on financial market movements. Posts by reputable crypto trader influencers are selected, and packages of Twitter posts are used as data. Preprocessing of the texts, which are characterized by heavy use of slang words and abbreviations, consists of lemmatization with Stanza and the use of regular expressions. In solving the binary classification problem, a word is treated as an element of the vector representing a data unit. The best markup parameters for processing Binance candles are searched for. The feature selection methods, which are necessary for a precise description of the text data and the subsequent process of establishing dependence, are drawn from machine learning and statistical analysis. The first is feature selection based on the information criterion; this approach is implemented in a random forest model and is relevant to the selection of features for splitting nodes in a decision tree. The second is based on the rigid compilation of a binary vector during a rough check of the presence or absence of a word in the package, followed by counting the sum of the elements of this vector. A decision is then made depending on whether this sum exceeds a threshold value determined in advance by analyzing the frequency distribution of mentions of the word. The algorithm used to solve the problem was named the benchmark and analyzed as a tool; similar algorithms are often used in automated trading strategies. The study also describes observations on the influence of frequently occurring words, which are used as a basis of dimension 2 and 3 in vectorization.
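A minimal sketch of the benchmark decision rule described above. The base words and threshold are hypothetical stand-ins; in the paper they come from the frequency analysis of word mentions.

```python
# Sketch of the "benchmark" rule: a package of tweets is reduced to a binary
# vector of base-word occurrences, and the class is decided by comparing the
# sum of that vector to a pre-set threshold. BASE_WORDS and THRESHOLD are
# illustrative, not the values used in the paper.
import re

BASE_WORDS = ["pump", "moon", "bull"]  # hypothetical base words
THRESHOLD = 2                          # hypothetical threshold

def classify_package(tweets: list[str]) -> int:
    """Return 1 (upward movement expected) if enough base words occur."""
    text = " ".join(t.lower() for t in tweets)
    tokens = set(re.findall(r"[a-z]+", text))
    binary_vector = [1 if w in tokens else 0 for w in BASE_WORDS]
    return 1 if sum(binary_vector) >= THRESHOLD else 0

print(classify_package(["BTC to the moon!", "Bull run incoming"]))  # -> 1
```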
-
Enhancing DevSecOps with continuous security requirements analysis and testing
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1687-1702
The fast-paced environment of DevSecOps requires integrating security at every stage of software development to ensure secure, compliant applications. Traditional methods of security testing, often performed late in the development cycle, are insufficient to address the unique challenges of continuous integration and continuous deployment (CI/CD) pipelines, particularly in complex, high-stakes sectors such as industrial automation. In this paper, we propose an approach that automates the analysis and testing of security requirements by embedding requirements verification into the CI/CD pipeline. Our method employs the ARQAN tool to map high-level security requirements to Security Technical Implementation Guides (STIGs) using semantic search, and RQCODE to formalize these requirements as code, providing testable and enforceable security guidelines. We implemented ARQAN and RQCODE within a CI/CD framework, integrating them with GitHub Actions for real-time security checks and automated compliance verification. Our approach supports established security standards such as IEC 62443 and automates security assessment starting from the planning phase, enhancing the traceability and consistency of security practices throughout the pipeline. Evaluation of this approach in collaboration with an industrial automation company shows that it effectively covers critical security requirements, achieving automated compliance for 66.15% of the STIG guidelines relevant to the Windows 10 platform. Feedback from industry practitioners further underscores its practicality: 85% of security requirements mapped to concrete STIG recommendations, and 62% of these had matching testable implementations in RQCODE. This evaluation highlights the approach's potential to shift security validation earlier in the development process, contributing to a more resilient and secure DevSecOps lifecycle.
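ARQAN's internal semantic search is not detailed here, so the sketch below only approximates the requirement-to-STIG mapping step with TF-IDF cosine similarity; the requirement and STIG texts are hypothetical.

```python
# Hedged stand-in for mapping a high-level requirement to the closest STIG
# rule by text similarity (a simpler proxy for ARQAN's semantic search).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = ["User accounts must lock after repeated failed logins."]
stig_rules = [  # hypothetical rule texts, not actual STIG wording
    "Account lockout threshold must be 3 or fewer invalid logon attempts.",
    "The screen saver must be password protected.",
]

vec = TfidfVectorizer().fit(requirements + stig_rules)
sims = cosine_similarity(vec.transform(requirements), vec.transform(stig_rules))
best = sims[0].argmax()
print(f"Best match: {stig_rules[best]} (score {sims[0][best]:.2f})")
```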
-
Reducing computational complexity in agent-based epidemiological model calibration: application of deep learning surrogates
Computer Research and Modeling, 2026, v. 18, no. 1, pp. 185-200
Acute respiratory infections are a major public health concern, as they are the leading cause of illness and death in many countries. There is therefore great interest in developing models and methods capable of simulating the spread of these infections within communities, with the aim of controlling outbreaks and preventing their spread. Agent-based models (ABMs) are among the most important tools in epidemiological research for modeling epidemic dynamics in realistic populations, but they face significant computational challenges both in operation and in calibration against epidemiological data, since parameter estimation typically requires repeated simulations across large parameter spaces to determine plausible values for key epidemiological parameters. This paper addresses the problem of alleviating computational constraints in the inverse problem of calibrating an ABM that simulates the spread of respiratory infections in Saint Petersburg. The paper proposes applying machine learning surrogates that link epidemic trajectories to the underlying epidemiological parameters, enabling parameter estimates to be inferred quickly from observed epidemic data. This is done by formulating the calibration of ABMs against epidemiological data as a supervised learning problem, in which sequences extracted from epidemiological trajectories are associated with the underlying epidemiological parameters. The research evaluated the performance of attention-based sequence modeling, probabilistic deep learning, and distributional regression for inferring parameter estimates from truncated sequences of epidemic trajectories. Experimental evaluations demonstrated the effectiveness of this approach and its practical, straightforward application. The results also indicated the superiority of attention-based sequence modeling, which showed the most consistent performance across metrics and horizons in accurate parameter estimation and credible uncertainty quantification. Distributional regression also performed well, with particular strengths in point accuracy, while probabilistic deep learning performed poorly, especially at longer input horizons.
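A minimal sketch of the supervised-learning formulation under stated assumptions: a synthetic parametric curve stands in for ABM output, and a multi-output regressor learns the inverse map from a truncated trajectory back to the parameters that generated it. The paper's surrogates (attention-based, probabilistic) are heavier; this only illustrates the setup.

```python
# Surrogate calibration as supervised learning: (trajectory -> parameters).
# The "ABM" here is a hypothetical closed-form generator, used only so the
# example is self-contained and runnable.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
n_sims, horizon = 2000, 30

# Hypothetical stand-in for the ABM: two parameters -> noisy incidence curve.
params = rng.uniform([1.0, 0.05], [3.0, 0.5], size=(n_sims, 2))
t = np.arange(horizon)
trajectories = params[:, :1] * 100 * np.exp(-params[:, 1:] * t) * np.sin(0.3 * t + 1) ** 2
trajectories += rng.normal(0, 1.0, trajectories.shape)  # observation noise

surrogate = MultiOutputRegressor(GradientBoostingRegressor())
surrogate.fit(trajectories, params)  # learn the inverse map

estimate = surrogate.predict(trajectories[:1])
print("true:", params[0], "estimated:", estimate[0])
```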
-
Comparison of Arctic zone RF companies with different Polar Index ratings by economic criteria with the help of machine learning tools
Computer Research and Modeling, 2020, v. 12, no. 1, pp. 201-215
The paper presents a comparative analysis of the enterprises of the Arctic Zone of the Russian Federation (AZ RF) on economic indicators in accordance with the Polar Index rating. The study covers numerical data on 193 enterprises located in the AZ RF. Machine learning methods are applied, both standard open-source ones and our own original methods: the method of Optimally Reliable Partitions (ORP) and the method of Statistically Weighted Syndromes (SWS). When searching for the partition that maximizes the quality functional, the study used the simplest family of one-dimensional partitions with a single boundary point, as well as a collection of two-dimensional partitions with one boundary point on each of the two combined variables. Permutation tests allow not only evaluating the reliability of the revealed regularities but also excluding partitions of excessive complexity from their set. Patterns connecting the class number and economic indicators are revealed using the SDT method on one-dimensional indicators. The regularities revealed within the simplest one-dimensional model with one boundary point, with significance not worse than p < 0.001, are also presented in the study. The so-called sliding control (cross-validation) method was used for a reliable evaluation of this diagnostic ability. As a result, a set of methods with sufficient effectiveness was identified. A collective method based on the results of several machine learning methods showed the high importance of economic indicators for dividing the enterprises in accordance with the Polar Index rating. Our study showed that the companies in the top of the Polar Index rating are, in general, distinguishable by financial indicators among all companies in the Arctic Zone. However, it would be useful to supplement the list of indicators with ecological and social criteria.
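A sketch of the permutation-test idea for a one-dimensional partition with a single boundary point. The data, quality functional, and boundary are illustrative; this is not the ORP method itself.

```python
# Permutation test for the reliability of a revealed regularity: compare the
# observed partition quality against its distribution under label shuffling.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)                       # a hypothetical economic indicator
y = (x + rng.normal(scale=1.5, size=200)) > 0  # class labels (e.g., top-rated or not)

def quality(x, y, boundary):
    """Difference in class-1 frequency between the two sides of the split."""
    left, right = y[x <= boundary], y[x > boundary]
    return abs(left.mean() - right.mean())

observed = quality(x, y, boundary=0.0)
perm = [quality(x, rng.permutation(y), 0.0) for _ in range(9999)]
p_value = (1 + sum(q >= observed for q in perm)) / (1 + len(perm))
print(f"observed quality {observed:.3f}, permutation p-value {p_value:.4f}")
```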
-
Analysis of the effectiveness of machine learning methods in the problem of gesture recognition based on the data of electromyographic signals
Computer Research and Modeling, 2021, v. 13, no. 1, pp. 175-194
Gesture recognition is an urgent challenge in the development of human-machine interfaces. We analyzed machine learning methods for gesture classification based on electromyographic muscle signals in order to identify the most effective one. Methods such as the naive Bayesian classifier (NBC), logistic regression, decision tree, random forest, gradient boosting, support vector machine (SVM), the $k$-nearest neighbors algorithm, and ensembles (NBC and decision tree, NBC and gradient boosting, gradient boosting and decision tree) were considered. Electromyography (EMG) was chosen as the method of obtaining information about gestures. This solution does not require the hand to be in the camera's field of view and can be used to recognize finger movements. To test the effectiveness of the selected gesture recognition methods, a device was developed for recording the EMG signal, comprising three electrodes and an EMG sensor connected to a microcontroller and a power supply. The following gestures were chosen: a clenched fist, “thumb up”, “Victory”, squeezing an index finger, and waving a hand from right to left. Accuracy, precision, recall, and execution time were used to evaluate the effectiveness of the classifiers. These parameters were calculated for three options for the placement of the EMG electrodes on the forearm. According to the test results, the most effective methods are the $k$-nearest neighbors algorithm, random forest, and the ensemble of NBC and gradient boosting; the average accuracy of the ensemble over the three electrode positions was 81.55%. The electrode position at which the machine learning methods achieve maximum accuracy was also determined: one of the differential electrodes is located at the intersection of the flexor digitorum profundus and the flexor pollicis longus, and the second above the flexor digitorum superficialis.
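A hedged sketch of the classifier comparison: synthetic feature vectors replace the recorded EMG windows, and a few of the methods named above are compared on accuracy, precision, and recall.

```python
# Compare several classifiers from the abstract on synthetic "EMG features"
# for five gesture classes; data and dimensions are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=8, n_informative=6,
                           n_classes=5, random_state=0)  # 5 gestures
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("kNN", KNeighborsClassifier()),
                  ("random forest", RandomForestClassifier(random_state=0)),
                  ("NBC", GaussianNB())]:
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          f"acc={accuracy_score(y_te, y_pred):.2f}",
          f"prec={precision_score(y_te, y_pred, average='macro'):.2f}",
          f"rec={recall_score(y_te, y_pred, average='macro'):.2f}")
```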
-
Extracting knowledge from text messages: overview and state-of-the-art
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1291-1315
In general, solving the information explosion problem can be delegated to systems for the automatic processing of digital data. These systems are intended for recognizing, sorting, meaningfully processing, and presenting data in formats readable and interpretable by humans. The creation of intelligent knowledge extraction systems that handle unstructured data would be a natural solution in this area. At the same time, the evident progress in these tasks for structured data contrasts with the limited success of unstructured data processing and, in particular, document processing. Currently, this research area is under active development and investigation. The present paper is a systematic survey of both Russian and international publications dedicated to the leading trend in automatic text data processing: Text Mining (TM). We cover the main tasks and notions of TM, as well as its place in the current AI landscape. Furthermore, we analyze the complications of natural language processing (NLP) that arise because natural-language texts are weakly structured and often provide ambiguous linguistic information. We describe the stages of text data preparation, cleaning, and feature selection which, alongside the data obtained via morphological, syntactic, and semantic analysis, constitute the input for the TM process. This process can be represented as mapping a set of text documents to «knowledge». Using the case of stock trading, we demonstrate the formalization of the problem of making a trade decision based on a set of analytical recommendations. Examples of such mappings are methods of Information Retrieval (IR), text summarization, sentiment analysis, document classification and clustering, etc. The common point of all TM tasks and techniques is the selection of word forms and their derivatives used to recognize content in natural-language symbol sequences. Considering IR as an example, we examine classic types of search, such as searching for word forms, phrases, patterns, and concepts, as well as the augmentation of patterns with syntactic and semantic information. Next, we provide a general description of the NLP toolkit: morphological, syntactic, semantic, and pragmatic analysis. Finally, we end the paper with a comparative analysis of modern TM tools, which can be helpful for selecting a suitable TM platform based on the user's needs and skills.
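As a minimal illustration of the simplest search type mentioned above (searching for word forms), the sketch below matches inflected forms of a stem in hypothetical documents; real TM pipelines add morphological and semantic analysis on top of this.

```python
# Word-form search via a stem pattern: the crudest IR building block.
# Documents and the stem "recommend" are illustrative.
import re

documents = [
    "Analysts recommend buying the stock after strong earnings.",
    "The report recommends holding until the next quarter.",
]

pattern = re.compile(r"\brecommend\w*\b", flags=re.IGNORECASE)

for doc in documents:
    hits = pattern.findall(doc)
    print(f"{hits} <- {doc}")
```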
-
Frequency, time, and spatial electroencephalogram changes after COVID-19 during a simple speech task
Computer Research and Modeling, 2023, v. 15, no. 3, pp. 691-701
We found a predominance of α-rhythm patterns in the left hemisphere in healthy people compared to people with a history of COVID-19. Moreover, we observe a significant decrease in the left hemisphere's contribution to the speech center area in people who have undergone COVID-19 when performing speech tasks.
Our findings show that the signal in healthy subjects is more spatially localized and synchronized between hemispheres when performing tasks compared to people who recovered from COVID-19. We also observed a decrease in low frequencies in both hemispheres after COVID-19.
EEG-patterns of COVID-19 are detectable in an unusual frequency domain. What is usually considered noise in electroencephalographic (EEG) data carries information that can be used to determine whether or not a person has had COVID-19. These patterns can be interpreted as signs of hemispheric desynchronization, premature brain ageing, and more significant brain strain when performing simple tasks compared to people who did not have COVID-19.
In our work, we have shown the applicability of neural networks in helping to detect the long-term effects of COVID-19 in EEG data. Furthermore, our data, in line with other studies, supported the hypothesis of the severity of the long-term effects of COVID-19 detected in the EEG data of EEG-based BCIs. The presented findings on functional brain activity make it possible to use machine learning methods on simple, non-invasive brain-computer interfaces to detect post-COVID syndrome and to advance neurorehabilitation.
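The abstract centers on frequency-domain EEG patterns; as a generic illustration only (not the paper's pipeline), the sketch below extracts relative α-band power from a synthetic signal with Welch's method, the kind of feature such classifiers typically consume. The sampling rate and bands are assumptions.

```python
# Relative alpha-band (8-13 Hz) power from a synthetic EEG channel.
import numpy as np
from scipy.signal import welch

fs = 250  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
alpha = psd[(freqs >= 8) & (freqs <= 13)].sum()
print(f"relative alpha power: {alpha / psd.sum():.2f}")
```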
-
Random forest of risk factors as a predictive tool for adverse events in clinical medicine
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 987-1004
The aim of the study was to develop an ensemble machine learning method for constructing interpretable predictive models and to validate it using the example of predicting in-hospital mortality (IHM) in patients with ST-segment elevation myocardial infarction (STEMI).
A retrospective cohort study was conducted using data from 5446 electronic medical records of STEMI patients who underwent percutaneous coronary intervention (PCI). Patients were divided into two groups: 335 (6.2%) patients who died during hospitalization and 5111 (93.8%) patients with a favourable in-hospital outcome. A pool of potential predictors was formed using statistical methods. Through multimetric categorization (minimizing p-values, maximizing the area under the ROC curve (AUC), and SHAP value analysis), decision trees, and multivariable logistic regression (MLR), predictors were transformed into risk factors for IHM. Predictive models for IHM were developed using MLR, Random Forest of Risk Factors (RandFRF), Stochastic Gradient Boosting (XGBoost), Random Forest (RF), Adaptive Boosting, Gradient Boosting, Light Gradient-Boosting Machine, Categorical Boosting (CatBoost), Explainable Boosting Machine and Stacking methods.
The authors developed the RandFRF method, which integrates the predictive outcomes of modified decision trees, identifies risk factors, and ranks them by their contribution to the risk of an adverse outcome. RandFRF enables the development of predictive models with high discriminative performance (AUC 0.908), comparable to models based on CatBoost and Stacking (AUC 0.904 and 0.908, respectively). In turn, the risk factors provide clinicians with information on the patient's risk group and the extent of each factor's impact on the probability of IHM. The risk factors identified by RandFRF can serve not only as a rationale for the prediction results but also as a basis for developing more accurate models.
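A sketch of one ingredient of such pipelines: turning a continuous predictor into a binary risk factor by choosing the cutoff that maximizes Youden's J on the ROC curve. The data are synthetic, and the actual RandFRF categorization combines several criteria (p-values, AUC, SHAP values).

```python
# Dichotomize a continuous predictor (here, a hypothetical "age") into a
# binary risk factor via the ROC-optimal cutoff.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
age = np.concatenate([rng.normal(62, 10, 940), rng.normal(74, 9, 60)])  # survivors, deaths
died = np.concatenate([np.zeros(940), np.ones(60)])

fpr, tpr, thresholds = roc_curve(died, age)
cutoff = thresholds[np.argmax(tpr - fpr)]   # Youden's J = sensitivity + specificity - 1
risk_factor = (age >= cutoff).astype(int)   # binary risk factor for downstream models
print(f"age cutoff: {cutoff:.1f} years")
```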
-
Development of and research on machine learning algorithms for solving the classification problem in Twitter publications
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 185-195
Posts on social networks can both predict the movement of the financial market and, in some cases, even determine its direction. Analysis of posts on Twitter contributes to the prediction of cryptocurrency prices. The specificity of the community is reflected in its special vocabulary: slang expressions and abbreviations are used in posts, and their presence makes it difficult to vectorize the text data; consequently, preprocessing methods such as Stanza lemmatization and the use of regular expressions are considered. This paper describes the simplest machine learning models we created, which may work despite problems such as a lack of data and a short prediction timeframe. In solving the binary classification problem, a word is treated as an element of a binary vector representing a data unit. Base words are determined by frequency analysis of word mentions. The markup is based on Binance candlesticks with variable parameters for a more accurate description of the trend of price changes. The paper introduces metrics that reflect the distribution of words depending on whether they belong to the positive or the negative class. To solve the classification problem, we used a dense model with parameters selected by Keras Tuner, logistic regression, a random forest classifier, a naive Bayesian classifier capable of working with a small sample (which is very important for our task), and the k-nearest neighbors method. The constructed models were compared on the accuracy of the predicted labels. During the investigation we found that the best approach is to use models that predict the price movements of a single coin. Our model deals with posts that mention the LUNA project, which no longer exists. This approach to the binary classification of text data is widely used to predict the price of an asset and the trend of its movement, and it is often applied in automated trading.
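A minimal sketch of the vectorization just described, under stated assumptions: the posts and labels are hypothetical, base words come from mention frequency, each post becomes a binary vector, and a naive Bayes classifier (suited to small samples) is fit on it.

```python
# Frequency-based base words -> binary vectors -> naive Bayes.
# Posts, labels, and the choice of 3 base words are illustrative.
from collections import Counter
from sklearn.naive_bayes import BernoulliNB

posts = ["LUNA pump incoming", "LUNA crash fear", "pump LUNA now", "fear and dump"]
labels = [1, 0, 1, 0]  # 1 = price up on the marked-up candle (hypothetical markup)

counts = Counter(w for p in posts for w in p.lower().split())
base_words = [w for w, _ in counts.most_common(3)]  # base words by mention frequency

X = [[int(w in p.lower().split()) for w in base_words] for p in posts]
model = BernoulliNB().fit(X, labels)
print(base_words, model.predict([[1, 1, 0]]))
```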
-
Nonsmooth Distributed Min-Max Optimization Using the Smoothing Technique
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 469-480
Distributed saddle point problems (SPPs) have numerous applications in optimization, matrix games and machine learning. For example, the training of generative adversarial networks can be represented as a min-max optimization problem, and training regularized linear models can be reformulated as an SPP as well. This paper studies distributed nonsmooth SPPs with Lipschitz-continuous objective functions. The objective function is represented as a sum of several components that are distributed between groups of computational nodes. The nodes, or agents, exchange information through a communication network that may be centralized or decentralized. A centralized network has a universal information aggregator (a server, or master node) that directly communicates with each of the agents and can therefore coordinate the optimization process. In a decentralized network, all nodes are equal, there is no server node, and each agent communicates only with its immediate neighbors.
We assume that each node holds its objective locally and can compute its value at given points, i.e., it has access to a zero-order oracle. Zero-order information is used when the gradient of the function is costly or impossible to compute, or when the function is not differentiable. For example, in reinforcement learning one needs to generate a trajectory to evaluate the current policy; this policy evaluation process can be interpreted as computing a function value. We propose an approach that uses a smoothing technique, i.e., applies a first-order method to a smoothed version of the initial function. It can be shown that the stochastic gradient of the smoothed function can be viewed as a random two-point gradient approximation of the initial function. Smoothing approaches have been studied for distributed zero-order minimization, and our paper generalizes the smoothing technique to SPPs.
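For reference, a standard form of the smoothing construction (the paper's exact constants and norms may differ): the smoothed function averages $f$ over a random perturbation in the unit ball, and its gradient admits an unbiased two-point zero-order estimate.

```latex
% Smoothed approximation of f, averaging over the Euclidean unit ball B^n:
\[
  \hat f_\gamma(x) \;=\; \mathbb{E}_{\tilde e \sim U(B^n)}\, f(x + \gamma \tilde e).
\]
% Two-point stochastic gradient, built from two zero-order oracle calls with
% e uniform on the unit sphere S^n; it is an unbiased estimate of
% \nabla \hat f_\gamma(x):
\[
  g(x) \;=\; \frac{n}{2\gamma}\,\bigl(f(x + \gamma e) - f(x - \gamma e)\bigr)\, e,
  \qquad e \sim U(S^n).
\]
```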
Keywords: convex optimization, distributed optimization.