-
Modelling of trends in the volume and structure of accumulated credit indebtedness in the banking system
Computer Research and Modeling, 2019, v. 11, no. 5, pp. 965-978
The volume and structure of accumulated credit debt to the banking system depend on many factors, the most important of which is the level of interest rates. A correct assessment of borrowers' reaction to changes in monetary policy makes it possible to develop econometric models that represent the structure of the banking system's credit portfolio by loan term. These models help to calculate indicators characterizing the level of interest rate risk in the system as a whole. In the study, we identified four types of models: a discrete linear model based on transfer functions, a state-space model, the classical econometric ARMAX model, and a nonlinear Hammerstein–Wiener model. To describe them, we employed the formal language of automatic control theory; to identify the models, we used the MATLAB software package. The study revealed that the discrete linear state-space model is most suitable for short-term forecasting of both the volume and the structure of credit debt, which in turn makes it possible to predict trends in the structure of accumulated credit debt on a forecasting horizon of one year. The model, fitted on real data, has shown that the structure of credit debt by repayment period is highly sensitive to changes in the Central Bank's monetary policy. Thus, a sharp increase in interest rates in response to external market shocks leads borrowers to shorten credit terms, while the overall level of debt rises, primarily due to the increasing revaluation of nominal debt. During a stable downward trend in interest rates, the structure shifts toward long-term debt.
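The abstract does not reproduce the model equations; as a rough illustration of the ARMAX variant (the authors worked in MATLAB), a minimal Python sketch on synthetic data might look as follows. The series, model orders, and coefficients here are invented for demonstration only.

```python
# Illustrative sketch (not the authors' code): fitting an ARMAX-type model of
# credit-debt volume driven by an interest-rate series, on synthetic data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n = 120                                            # monthly observations
rate = 8 + np.cumsum(rng.normal(0, 0.2, n))        # hypothetical interest-rate path
debt = np.empty(n)
debt[0] = 100.0
for t in range(1, n):                              # debt reacts negatively to the rate level
    debt[t] = 0.9 * debt[t - 1] - 0.5 * rate[t] + 10 + rng.normal(0, 0.5)

# ARMAX(2, 1): AR order 2, MA order 1, interest rate as exogenous input
model = ARIMA(debt, exog=rate, order=(2, 0, 1))
result = model.fit()

# One-year-ahead forecast under an assumed scenario (rates held flat)
future_rate = np.full(12, rate[-1])
forecast = result.forecast(steps=12, exog=future_rate)
print(result.summary().tables[1])
print("12-month debt forecast:", np.round(forecast, 1))
```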
-
Enhancing DevSecOps with continuous security requirements analysis and testing
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1687-1702
The fast-paced environment of DevSecOps requires integrating security at every stage of software development to ensure secure, compliant applications. Traditional methods of security testing, often performed late in the development cycle, are insufficient to address the unique challenges of continuous integration and continuous deployment (CI/CD) pipelines, particularly in complex, high-stakes sectors such as industrial automation. In this paper, we propose an approach that automates the analysis and testing of security requirements by embedding requirements verification into the CI/CD pipeline. Our method employs the ARQAN tool to map high-level security requirements to Security Technical Implementation Guides (STIGs) using semantic search, and RQCODE to formalize these requirements as code, providing testable and enforceable security guidelines. We implemented ARQAN and RQCODE within a CI/CD framework, integrating them with GitHub Actions for real-time security checks and automated compliance verification. Our approach supports established security standards like IEC 62443 and automates security assessment starting from the planning phase, enhancing the traceability and consistency of security practices throughout the pipeline. Evaluation of this approach in collaboration with an industrial automation company shows that it effectively covers critical security requirements, achieving automated compliance for 66.15% of STIG guidelines relevant to the Windows 10 platform. Feedback from industry practitioners further underscores its practicality: 85% of security requirements were mapped to concrete STIG recommendations, and 62% of these requirements have matching testable implementations in RQCODE. This evaluation highlights the approach's potential to shift security validation earlier in the development process, contributing to a more resilient and secure DevSecOps lifecycle.
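The internals of ARQAN are not given in the abstract; a generic semantic-search mapping of a requirement to STIG rule titles could be sketched as follows. The embedding model name and the STIG snippets are placeholders for illustration, not the tool's actual data.

```python
# Illustrative sketch only: generic semantic matching of a high-level security
# requirement to STIG rule titles via sentence embeddings and cosine similarity.
from sentence_transformers import SentenceTransformer, util

requirement = "Accounts must be locked after three consecutive failed logon attempts."
stig_rules = [
    "Windows 10 account lockout threshold must be configured to 3 or fewer invalid attempts.",
    "The minimum password length must be configured to 14 characters.",
    "Anonymous enumeration of SAM accounts must not be allowed.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")   # placeholder embedding model
req_vec = model.encode(requirement, convert_to_tensor=True)
rule_vecs = model.encode(stig_rules, convert_to_tensor=True)

scores = util.cos_sim(req_vec, rule_vecs)[0]      # similarity of the requirement to each rule
best = int(scores.argmax())
print(f"Best STIG match (score {float(scores[best]):.2f}): {stig_rules[best]}")
```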
-
Analysis of socio-informational influence through the examples of US wars in Korea, Vietnam, and Iraq
Computer Research and Modeling, 2014, v. 6, no. 1, pp. 167-184
In the first section of the paper, a definition of presentation (perception) functions, components of an individual's subjective view of the world, is proposed. Using the basic psychophysical law formulated by S. Stevens, and relying on the hypotheses of socialization, rationality, individual choice, complexity of informational influences, dynamics of ideas and perceptions, and accessibility, a formal dependence was derived that makes it possible to calculate the presentation (perception) function for probabilistic indicators (with a known distribution function or subjective probability) and for indicators of interval type. In the second and third sections, the parameters of the presentation function are estimated from surveys of the U.S. population related to the wars in Korea, Vietnam, and Iraq.
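For reference, the standard form of Stevens' power law on which the presentation function builds (the paper's derived dependence itself is not reproduced here):

```latex
% Stevens' power law: perceived magnitude \psi as a function of stimulus intensity I,
% with scaling constant k and a modality-dependent exponent a.
\psi(I) = k \, I^{a}
```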
-
Extracting knowledge from text messages: overview and state-of-the-art
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1291-1315
In general, solving the information explosion problem can be delegated to systems for automatic processing of digital data. These systems are intended for recognizing, sorting, meaningfully processing and presenting data in formats readable and interpretable by humans. The creation of intelligent knowledge extraction systems that handle unstructured data would be a natural solution in this area. At the same time, the evident progress in these tasks for structured data contrasts with the limited success of unstructured data processing, and, in particular, document processing. Currently, this research area is undergoing active development and investigation. The present paper is a systematic survey on both Russian and international publications that are dedicated to the leading trend in automatic text data processing: Text Mining (TM). We cover the main tasks and notions of TM, as well as its place in the current AI landscape. Furthermore, we analyze the complications that arise in natural language processing (NLP) of texts, which are weakly structured and often provide ambiguous linguistic information. We describe the stages of text data preparation, cleaning, and selecting features which, alongside the data obtained via morphological, syntactic, and semantic analysis, constitute the input for the TM process. This process can be represented as mapping a set of text documents to «knowledge». Using the case of stock trading, we demonstrate the formalization of the problem of making a trade decision based on a set of analytical recommendations. Examples of such mappings are methods of Information Retrieval (IR), text summarization, sentiment analysis, document classification and clustering, etc. The common point of all tasks and techniques of TM is the selection of word forms and their derivatives used to recognize content in NL symbol sequences. Considering IR as an example, we examine classic types of search, such as searching for word forms, phrases, patterns and concepts. Additionally, we consider the augmentation of patterns with syntactic and semantic information. Next, we provide a general description of all NLP instruments: morphological, syntactic, semantic and pragmatic analysis. Finally, we end the paper with a comparative analysis of modern TM tools which can be helpful for selecting a suitable TM platform based on the user’s needs and skills.
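As a toy illustration of one TM task covered by the survey (document clustering over a bag-of-words representation), here is a minimal scikit-learn sketch; the corpus is invented and the pipeline is deliberately simplified.

```python
# Toy illustration of document clustering, one of the Text Mining tasks
# surveyed above; the documents below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "central bank raises interest rates to curb inflation",
    "analysts recommend buying shares after strong earnings report",
    "new deep learning model improves machine translation quality",
    "transformer architectures dominate natural language processing benchmarks",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)  # bag-of-words features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for doc, label in zip(docs, labels):
    print(label, doc)
```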
-
Statistical analysis of bigrams of specialized texts
Computer Research and Modeling, 2020, v. 12, no. 1, pp. 243-254
The method of stochastic matrix spectrum analysis is used to build an indicator that makes it possible to determine the subject of scientific texts without using keywords. This matrix is a matrix of conditional probabilities of bigrams, built from the statistics of alphabet characters in the text with spaces, numbers, and punctuation marks removed. Scientific texts are classified according to the mutual arrangement of invariant subspaces of the matrix of conditional probabilities of pairs of letter combinations. The separation indicator is the value of the cosine of the angle between the right and left eigenvectors corresponding to the maximum and minimum eigenvalues. The computational algorithm uses a special representation of the dichotomy parameter, which is the integral of the squared norm of the resolvent of the stochastic bigram matrix along a circle of a given radius in the complex plane. The integral tending to infinity indicates that the integration contour approaches an eigenvalue of the matrix. The paper presents the typical distribution of this specialty-identification indicator. For the statistical analysis, dissertations in 19 main specialties were analyzed, 20 texts per specialty, without taking the classification within each specialty into account. It was found that the empirical distributions of the cosine of the angle for the mathematical and humanities specialties do not have a common domain, so they can be formally separated by the value of this indicator without errors. Although the corpus of texts was not particularly large, an identification error of about 2% for arbitrarily selected dissertations seems to be a very good result compared to methods based on semantic analysis. It was also found that a text pattern can be constructed for each specialty in the form of a reference bigram matrix, in whose vicinity (in the norm of summable functions) the theme of a scientific work can be accurately identified without using keywords. The proposed method can be used as a comparative indicator of the greater or lesser rigor of a scientific text or as an indicator of the text's compliance with a certain scientific level.
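A sketch of one possible reading of the indicator (the exact pairing of eigenvectors and the choice of alphabet are assumptions here, not taken from the paper): build the row-stochastic bigram matrix and compute the cosine between a right and a left eigenvector.

```python
# Illustrative sketch: stochastic matrix of letter-bigram conditional probabilities
# and the cosine between selected left and right eigenvectors.
import numpy as np
from scipy.linalg import eig

def bigram_matrix(text, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Conditional probabilities P(next letter | current letter)."""
    idx = {c: i for i, c in enumerate(alphabet)}
    counts = np.full((len(alphabet), len(alphabet)), 1e-9)   # small smoothing
    letters = [c for c in text.lower() if c in idx]          # drop spaces, digits, punctuation
    for a, b in zip(letters, letters[1:]):
        counts[idx[a], idx[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)        # row-stochastic

def cosine_indicator(P):
    w, vl, vr = eig(P, left=True, right=True)
    i_max, i_min = np.argmax(w.real), np.argmin(w.real)
    u = vr[:, i_max].real            # right eigenvector of the largest eigenvalue
    v = vl[:, i_min].real            # left eigenvector of the smallest eigenvalue
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_indicator(bigram_matrix("the quick brown fox jumps over the lazy dog " * 50)))
```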
-
Modernization as a global process: the experience of mathematical modeling
Computer Research and Modeling, 2021, v. 13, no. 4, pp. 859-873
The article analyzes empirical data on the long-term demographic and economic dynamics of the countries of the world for the period from the beginning of the 19th century to the present. Population and GDP of a number of countries of the world for the period 1500–2016 were selected as indicators characterizing the long-term demographic and economic dynamics of the countries of the world. Countries were chosen in such a way that they included representatives with different levels of development (developed and developing countries), as well as countries from different regions of the world (North America, South America, Europe, Asia, Africa). A specially developed mathematical model was used for modeling and data processing. The presented model is an autonomous system of differential equations that describes the processes of socio-economic modernization, including the process of transition from an agrarian society to an industrial and post-industrial one. The model contains the idea that the process of modernization begins with the emergence of an innovative sector in a traditional society, developing on the basis of new technologies. The population is gradually moving from the traditional sector to the innovation sector. Modernization is completed when most of the population moves to the innovation sector.
Statistical methods of data processing and Big Data methods, including hierarchical clustering, were used. Using the developed algorithm based on the random descent method, the parameters of the model were identified and verified against empirical series, and the model was tested on statistical data reflecting the changes observed in developed and developing countries during the modernization of the past centuries. Testing has demonstrated the model's high quality: the deviations of the calculated curves from the statistical data are usually small and occur during periods of wars and economic crises. Thus, the analysis of statistical data on the long-term demographic and economic dynamics of the countries of the world made it possible to identify general patterns and formalize them in the form of a mathematical model. The model will be used to forecast demographic and economic dynamics in different countries of the world.
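The paper's actual system of differential equations is not reproduced in the abstract; the toy two-sector sketch below (traditional sector T, innovation sector N, with an invented migration term) only mimics the described transition from a traditional to an innovation society.

```python
# Illustrative sketch only: a toy autonomous ODE system for the transition of
# population shares from a traditional sector T to an innovation sector N.
import numpy as np
from scipy.integrate import solve_ivp

def modernization(t, y, r=0.01, m=0.05):
    T, N = y                          # traditional and innovation sector shares
    transfer = m * T * N / (T + N)    # invented migration term into the innovation sector
    return [r * T - transfer, r * N + transfer]

sol = solve_ivp(modernization, (0, 300), [0.99, 0.01], t_eval=np.linspace(0, 300, 7))
for t, T, N in zip(sol.t, *sol.y):
    print(f"t={t:5.0f}  traditional={T:6.3f}  innovation={N:6.3f}")
```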
-
Efficient and error-free information hiding in the hybrid domain of digital images using metaheuristic optimization
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 197-210
Data hiding in digital images is a promising direction of cybersecurity. Digital steganography methods provide imperceptible transmission of secret data over an open communication channel. The information embedding efficiency depends on the embedding imperceptibility, capacity, and robustness. These quality criteria are mutually inverse, and the improvement of one indicator usually leads to the deterioration of the others. A balance between them can be achieved using metaheuristic optimization. Metaheuristics are a class of optimization algorithms that find an optimal, or close to optimal, solution for a variety of problems, including those that are difficult to formalize, by simulating various natural processes, for example, the evolution of species or the behavior of animals. In this study, we propose an approach to data hiding in the hybrid spatial-frequency domain of digital images based on metaheuristic optimization. Changing a block of image pixels according to some change matrix is considered as the embedding operation. We select the change matrix adaptively for each block using metaheuristic optimization algorithms. In this study, we compare the performance of three metaheuristics, namely the genetic algorithm, particle swarm optimization, and differential evolution, in finding the best change matrix. Experimental results showed that the proposed approach provides high imperceptibility of embedding, high capacity, and error-free extraction of embedded information. At the same time, storing the change matrices for each block is not required for further data extraction. This improves user experience and reduces the chance of an attacker discovering the steganographic attachment. Compared with the previous algorithm for embedding information into discrete cosine transform coefficients using the QIM method [Evsutin, Melman, Meshcheryakov, 2021], the metaheuristics increased the imperceptibility indicator (estimated by the PSNR metric) and the capacity by 26.02% and 30.18%, respectively, for the genetic algorithm, by 26.01% and 19.39% for particle swarm optimization, and by 27.30% and 28.73% for differential evolution.
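This is not the paper's embedding scheme, but a toy sketch showing the role of the metaheuristic: differential evolution searches for a change matrix for one pixel block that carries a secret bit (here via a simple sum-parity rule, an assumption made for illustration) while keeping distortion minimal.

```python
# Toy sketch (not the paper's actual scheme): differential evolution picks a
# change matrix for a 4x4 pixel block so that a parity rule carries one secret
# bit while the distortion of the block stays minimal.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(4, 4)).astype(float)   # hypothetical pixel block
secret_bit = 1

def cost(flat_change):
    change = np.round(flat_change).reshape(4, 4)
    modified = np.clip(block + change, 0, 255)
    distortion = np.mean((modified - block) ** 2)          # keep the block close to the original
    embeds = (int(modified.sum()) % 2) == secret_bit       # toy extraction rule: sum parity
    return distortion + (0.0 if embeds else 1e6)           # heavy penalty if the bit is not embedded

result = differential_evolution(cost, bounds=[(-2, 2)] * 16, seed=0, tol=1e-6)
change_matrix = np.round(result.x).reshape(4, 4)
print("change matrix:\n", change_matrix, "\nMSE:", result.fun)
```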
-
Features of social interactions: the basic model
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1323-1335
The paper presents the results of research on the creation of a mathematical model of moral choice based on the development of the approach proposed by V. A. Lefebvre. Unlike V. A. Lefebvre, who considered a rather speculative situation of a subject's moral choice between abstract “good” and “evil” under pressure from the outside world, taking into account the subject's subjective perception of this pressure, our study considers a more mundane and practically significant situation. We consider the case when the subject, in making decisions, is guided by his individual perception of the outside world (which may be distorted, for example, by external purposeful informational influence and manipulation of his consciousness), while “good” and “evil” are not abstract but are determined by the value system adopted in the particular society under consideration and tied to a specific ideology or religion, which may differ between societies.
As a result of the conducted research, a basic mathematical model has been developed, and special cases of its application have been considered. Some patterns related to moral choice are revealed, and their formal description is given. In particular, the situation of manipulation of consciousness is described in the language of the model, and a law of decreasing “morality” of a society consisting of so-called free subjects (that is, those who strive to act in accordance with their intentions and whose actions correspond to the image of their “I”) is formulated.
Keywords: moral choice, mathematical model, intention, readiness function, value system, free subject.
-
Additive regularization of topic models with fast text vectorization
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1515-1528
The probabilistic topic model of a text document collection finds two matrices: a matrix of conditional probabilities of topics in documents and a matrix of conditional probabilities of words in topics. Each document is represented by a multiset of words also called the “bag of words”, thus assuming that the order of words is not important for revealing the latent topics of the document. Under this assumption, the problem is reduced to a low-rank non-negative matrix factorization governed by likelihood maximization. In general, this problem is ill-posed having an infinite set of solutions. In order to regularize the solution, a weighted sum of optimization criteria is added to the log-likelihood. When modeling large text collections, storing the first matrix seems to be impractical, since its size is proportional to the number of documents in the collection. At the same time, the topical vector representation (embedding) of documents is necessary for solving many text analysis tasks, such as information retrieval, clustering, classification, and summarization of texts. In practice, the topical embedding is calculated for a document “on-the-fly”, which may require dozens of iterations over all the words of the document. In this paper, we propose a way to calculate a topical embedding quickly, by one pass over document words. For this, an additional constraint is introduced into the model in the form of an equation, which calculates the first matrix from the second one in linear time. Although formally this constraint is not an optimization criterion, in fact it plays the role of a regularizer and can be used in combination with other regularizers within the additive regularization framework ARTM. Experiments on three text collections have shown that the proposed method improves the model in terms of sparseness, difference, logLift and coherence measures of topic quality. The open source libraries BigARTM and TopicNet were used for the experiments.
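For reference, the standard ARTM criterion on which the additive regularization framework builds: the collection log-likelihood plus a weighted sum of regularizers, maximized over the word-topic matrix Φ and the topic-document matrix Θ.

```latex
% ARTM criterion: log-likelihood plus a weighted sum of regularizers R_i with
% coefficients \tau_i, maximized over \Phi = (\varphi_{wt}) and \Theta = (\theta_{td}).
\sum_{d \in D}\sum_{w \in d} n_{dw}\,
    \ln \sum_{t \in T} \varphi_{wt}\,\theta_{td}
\;+\; \sum_{i} \tau_i R_i(\Phi,\Theta) \;\to\; \max_{\Phi,\Theta}
```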
-
Personalization of mathematical models in cardiology: obstacles and perspectives
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 911-930
Most biomechanical tasks of interest to clinicians can be solved only using personalized mathematical models. Such models make it possible to formalize and relate key pathophysiological processes, to estimate from clinically available data non-measurable parameters that are important for the diagnosis of diseases, and to predict the result of a therapeutic or surgical intervention. The use of models in clinical practice imposes additional restrictions: clinicians require model validation on clinical cases, as well as speed and automation of the entire computational chain, from processing input data to obtaining a result. Limitations on the simulation time, determined by the time available for making a medical decision (of the order of several minutes), imply the use of reduction methods that correctly describe the processes under study within the framework of reduced models or machine learning tools.
Personalization of models requires patient-specific parameters, a personalized geometry of the computational domain, and generation of a computational mesh. Model parameters are estimated by direct measurements, by methods for solving inverse problems, or by machine learning methods. The requirement of personalization imposes severe restrictions on the number of fitted parameters that can be measured under standard clinical conditions. In addition to parameters, the model operates with boundary conditions that must take the patient's characteristics into account. Methods for setting personalized boundary conditions depend significantly on the clinical formulation of the problem and on the clinical data. Building a personalized computational domain through segmentation of medical images and generation of the computational mesh, as a rule, takes a lot of time and effort due to manual or semi-automatic operations. Development of automated methods for setting personalized boundary conditions and for segmentation of medical images with subsequent construction of a computational mesh is the key to the widespread use of mathematical modeling in clinical practice.
The aim of this work is to review our solutions for the personalization of mathematical models within the framework of three tasks of clinical cardiology: virtual assessment of the hemodynamic significance of coronary artery stenosis, calculation of global blood flow after hemodynamic correction of complex heart defects, and calculation of the coaptation characteristics of a reconstructed aortic valve.
Keywords: computational biomechanics, personalized model.