-
Dissipative Stochastic Dynamic Model of Language Signs Evolution
Computer Research and Modeling, 2011, v. 3, no. 2, pp. 103-124
We propose a dissipative stochastic dynamic model of language sign evolution that satisfies the principle of least action, one of the fundamental variational principles of nature. The model assumes a Poisson birth flow of language signs and an exponential distribution of their associative-semantic potential (ASP). The model is built on stochastic difference equations of a special type for dissipative processes. The momentary polysemy distribution and the frequency-rank distribution derived from the model do not differ significantly (by the Kolmogorov-Smirnov test) from the empirical distributions obtained from major Russian and English explanatory dictionaries and the corresponding frequency dictionaries.
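The abstract does not reproduce the model's stochastic difference equations, but its two distributional assumptions are easy to illustrate. A toy sketch (not the authors' equations): draw sign births from a Poisson flow and assign each new sign an exponentially distributed ASP; the rate and scale parameters here are our own hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000          # time steps (hypothetical horizon)
birth_rate = 5.0  # Poisson intensity of new-sign births per step (assumed)
asp_scale = 2.0   # mean of the exponential ASP distribution (assumed)

asp = []
for _ in range(T):
    n_new = rng.poisson(birth_rate)                # Poisson birth flow of signs
    asp.extend(rng.exponential(asp_scale, n_new))  # exponential ASP per sign

asp = np.asarray(asp)
print(f"{asp.size} signs born; mean ASP = {asp.mean():.2f}")
```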
-
Computational treatment of natural language text for intent detection
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1539-1554
Intent detection plays a crucial role in task-oriented conversational systems. To understand the user's goal, the system relies on its intent detector to classify the user's utterance, which may be expressed in different forms of natural language, into intent classes. However, the efficacy of intent detection systems has been hindered by a lack of data and by the fact that user intent is typically expressed in short, general sentences and colloquial language. The goal of this study is to develop an intent detection model that accurately classifies and detects user intent. The model computes similarity scores for its three constituent models to compare their performance. The proposed model uses Contextual Semantic Search (CSS) for semantic search, Latent Dirichlet Allocation (LDA) for topic modeling, the Bidirectional Encoder Representations from Transformers (BERT) semantic matching technique, and a combination of LDA and BERT for text classification and detection. The dataset is drawn from the Broad Twitter Corpus (BTC) and comprises various metadata. A pre-processing step was applied to prepare the data for analysis, and a sample of 1432 instances was selected out of the 5000 available records because manual annotation is required and time-consuming. To compare the performance of the model with existing models, similarity scores, precision, recall, F1 score, and accuracy were computed. The results revealed that LDA-BERT achieved an accuracy of 95.88% for intent detection, BERT 93.84%, and LDA 92.23%, showing that LDA-BERT outperforms the other models. It is hoped that the novel model will aid in ensuring information security and social media intelligence. For future work, an unsupervised LDA-BERT without any labeled data can be studied with the model.
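One plausible way to combine LDA and BERT for classification, sketched below, is to concatenate LDA topic proportions with BERT sentence embeddings and feed the result to a classifier. The libraries (scikit-learn, sentence-transformers), the embedding model name, and the toy utterances are our assumptions, not necessarily the paper's setup.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["book me a flight to lagos", "play some afrobeats", "cancel my order"]
labels = [0, 1, 2]  # hypothetical intent ids

# LDA topic proportions over a bag-of-words representation
bow = CountVectorizer().fit_transform(texts)
lda_feats = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(bow)

# BERT-style sentence embeddings
bert_feats = SentenceTransformer("all-MiniLM-L6-v2").encode(texts)

# Concatenate both views and train a simple classifier
X = np.hstack([lda_feats, bert_feats])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```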
-
Extraction of characters and events from narratives
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1593-1600
Event and character extraction from narratives is a fundamental task in text analysis. The application of event extraction techniques ranges from the summarization of different documents to the analysis of medical notes. We identify events based on a framework named “four W” (Who, What, When, Where) to capture all the essential components: the actors, actions, times, and places. In this paper, we explore two prominent techniques for event extraction: statistical parsing of syntactic trees and semantic role labeling. While these techniques have been investigated by different researchers in isolation, we directly compare the performance of the two approaches on our custom dataset, which we annotated ourselves.
Our analysis shows that statistical parsing of syntactic trees outperforms semantic role labeling in event and character extraction, especially in identifying specific details. Nevertheless, semantic role labeling demonstrates good performance in correctly identifying actors. We evaluate the effectiveness of both approaches by comparing metrics such as precision, recall, and F1 score, thus demonstrating their respective advantages and limitations.
Moreover, as part of our work, we propose several future applications of event extraction techniques that we plan to investigate, including code analysis and source code authorship attribution. We consider using event extraction to retrieve key code elements such as variable assignments and function calls, which can further help us analyze the behavior of programs and identify a project's contributors. Our work provides new insights into the performance and efficiency of statistical parsing and semantic role labeling techniques, offering researchers new directions for the application of these techniques.
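As a rough illustration of the statistical-parsing route to the “four W”, the sketch below extracts actors and actions from a dependency parse and times and places from named entities using spaCy; it approximates the idea, not the authors' exact pipeline.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # requires the model to be downloaded
doc = nlp("On Monday, Alice met Bob at the station.")

# Who / What: subjects attached to each verb in the dependency tree
for token in doc:
    if token.pos_ == "VERB":
        who = [c.text for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
        print(f"Who: {who}  What: {token.lemma_}")

# When / Where: temporal and locative named entities
when = [e.text for e in doc.ents if e.label_ in ("DATE", "TIME")]
where = [e.text for e in doc.ents if e.label_ in ("GPE", "LOC", "FAC")]
print(f"When: {when}  Where: {where}")
```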
-
Modeling the thermal field of stationary symmetric bodies in rarefied low-temperature plasma
Computer Research and Modeling, 2025, v. 17, no. 1, pp. 73-91
The work investigates the process of self-consistent relaxation of the region of disturbances created in a rarefied binary low-temperature plasma by a stationary charged ball or cylinder with an absorbing surface. A feature of such problems is their self-consistent kinetic nature, in which the transfer processes in phase space cannot be separated from the formation of the electromagnetic field. A mathematical model is presented that makes it possible to describe and analyze the state of the gas and of the electric and thermal fields in the vicinity of the body. The multidimensionality of the kinetic formulation creates certain problems for numerical solution, so a curvilinear system of nonholonomic coordinates was chosen that minimizes the phase space of the problem and thereby increases the efficiency of the numerical methods. The form of the Vlasov kinetic equation in these coordinates has been justified and analyzed. To solve it, a variant of the large-particle method with a constant form factor was used. The calculations used a moving grid that tracks the displacement of the support of the distribution function in phase space, which further reduced the volume of the controlled region of phase space. Key details of the model and the numerical method are presented. The model and the method are implemented as code in the Matlab language. Using the example of solving the problem for a ball, the presence of significant disequilibrium and anisotropy in the particle velocity distribution in the disturbed zone is shown. Based on the calculation results, the paper presents the evolution of the structure of the particle distribution function, profiles of the main macroscopic characteristics of the gas (concentration, current, temperature, and heat flow), and the characteristics of the electric field in the disturbed region. The mechanism of heating of attracted particles in the disturbed zone is established, and some important features of the formation of the heat flow are shown. The results are well explainable from a physical point of view, which confirms the adequacy of the model and the correct operation of the software tool. This work creates and tests a basis for the future development of tools for solving more complex problems of modeling the behavior of ionized gases near charged bodies.
The work will be useful to specialists in the fields of mathematical modeling, heat and mass transfer processes, and low-temperature plasma physics, as well as to postgraduate and senior students specializing in these areas.
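For reference, the self-consistent Vlasov-Poisson system underlying such models, written in its standard Cartesian form (the paper itself works in curvilinear nonholonomic coordinates, which are not reproduced here):

```latex
\frac{\partial f_s}{\partial t}
  + \mathbf{v}\cdot\nabla_{\mathbf{r}} f_s
  + \frac{q_s}{m_s}\,\mathbf{E}\cdot\nabla_{\mathbf{v}} f_s = 0,
\qquad
\mathbf{E} = -\nabla\varphi,
\qquad
\Delta\varphi = -\frac{1}{\varepsilon_0}\sum_s q_s \int f_s \, d^3v,
```

where $f_s(\mathbf{r}, \mathbf{v}, t)$ is the distribution function of species $s$ with charge $q_s$ and mass $m_s$.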
-
Development of and research into a rigid algorithm for analyzing Twitter publications and its influence on the movements of the cryptocurrency market
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 157-170
Social media is a crucial indicator of the position of assets in the financial market. The paper describes a rigid solution to the classification problem of determining the influence of social media activity on financial market movements. Reputable crypto-trading influencers are selected, and packages of Twitter posts are used as data. Because such texts are characterized by heavy use of slang words and abbreviations, preprocessing consists of lemmatization with Stanza and the use of regular expressions. In solving the binary classification problem, each word is treated as an element of the feature vector of a data unit. A search is performed for the best markup parameters for processing Binance candles. Feature selection, which is necessary for a precise description of the text data and the subsequent process of establishing dependence, is carried out with machine learning and statistical analysis. The first approach selects features based on an information criterion; it is implemented in a random forest model and is relevant to selecting features for splitting nodes in a decision tree. The second is based on the rigid compilation of a binary vector by a rough check of the presence or absence of a word in the package and counting the sum of the elements of this vector; a decision is then made depending on whether this sum exceeds a threshold value determined in advance by analyzing the frequency distribution of mentions of the word. The algorithm used to solve the problem was named the benchmark and analyzed as a tool; similar algorithms are often used in automated trading strategies. The study also describes observations of the influence of frequently occurring words, which are used as a basis of dimension 2 and 3 in vectorization.
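A minimal sketch of the benchmark rule described above: build a binary presence vector for a tracked word over a package of tweets, sum it, and emit a signal when the sum exceeds a preset threshold. The tweets and the threshold value here are illustrative.

```python
def benchmark_signal(tweets: list[str], word: str, threshold: int) -> bool:
    """Return True if `word` appears in more than `threshold` tweets."""
    presence = [1 if word in tweet.lower() else 0 for tweet in tweets]
    return sum(presence) > threshold

package = ["BTC to the moon!", "selling my btc today", "nice weather"]
print(benchmark_signal(package, "btc", threshold=1))  # True: 2 mentions > 1
```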
-
Enhancing DevSecOps with continuous security requirements analysis and testing
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1687-1702
The fast-paced environment of DevSecOps requires integrating security at every stage of software development to ensure secure, compliant applications. Traditional methods of security testing, often performed late in the development cycle, are insufficient to address the unique challenges of continuous integration and continuous deployment (CI/CD) pipelines, particularly in complex, high-stakes sectors such as industrial automation. In this paper, we propose an approach that automates the analysis and testing of security requirements by embedding requirements verification into the CI/CD pipeline. Our method employs the ARQAN tool to map high-level security requirements to Security Technical Implementation Guides (STIGs) using semantic search, and RQCODE to formalize these requirements as code, providing testable and enforceable security guidelines. We implemented ARQAN and RQCODE within a CI/CD framework, integrating them with GitHub Actions for real-time security checks and automated compliance verification. Our approach supports established security standards like IEC 62443 and automates security assessment starting from the planning phase, enhancing the traceability and consistency of security practices throughout the pipeline. Evaluation of this approach in collaboration with an industrial automation company shows that it effectively covers critical security requirements, achieving automated compliance for 66.15% of STIG guidelines relevant to the Windows 10 platform. Feedback from industry practitioners further underscores its practicality: 85% of security requirements mapped to concrete STIG recommendations, and 62% of these requirements had matching testable implementations in RQCODE. This evaluation highlights the approach's potential to shift security validation earlier in the development process, contributing to a more resilient and secure DevSecOps lifecycle.
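The abstract does not disclose ARQAN's internals, but semantic search over STIG texts can be sketched as embedding-based nearest-neighbor matching; the embedding model and STIG snippets below are placeholders, not the tool's actual configuration.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

stigs = [
    "Account lockout duration must be configured to 15 minutes or greater.",
    "The system must use an approved anti-virus program.",
]
requirement = "User accounts shall lock after repeated failed login attempts."

# Embed both sides and pick the STIG with the highest cosine similarity
stig_emb = model.encode(stigs, convert_to_tensor=True)
req_emb = model.encode(requirement, convert_to_tensor=True)
scores = util.cos_sim(req_emb, stig_emb)[0]

best = int(scores.argmax())
print(f"Best match (score {float(scores[best]):.2f}): {stigs[best]}")
```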
-
Extracting knowledge from text messages: overview and state-of-the-art
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1291-1315
In general, solving the information explosion problem can be delegated to systems for automatic processing of digital data. These systems are intended for recognizing, sorting, meaningfully processing, and presenting data in formats readable and interpretable by humans. The creation of intelligent knowledge extraction systems that handle unstructured data would be a natural solution in this area. At the same time, the evident progress in these tasks for structured data contrasts with the limited success of unstructured data processing, and, in particular, document processing. Currently, this research area is undergoing active development and investigation. The present paper is a systematic survey of both Russian and international publications dedicated to the leading trend in automatic text data processing: Text Mining (TM). We cover the main tasks and notions of TM, as well as its place in the current AI landscape. Furthermore, we analyze the complications that arise in natural language processing (NLP) of texts, which are weakly structured and often provide ambiguous linguistic information. We describe the stages of text data preparation, cleaning, and feature selection which, alongside the data obtained via morphological, syntactic, and semantic analysis, constitute the input for the TM process. This process can be represented as a mapping from a set of text documents to «knowledge». Using the case of stock trading, we demonstrate the formalization of the problem of making a trade decision based on a set of analytical recommendations. Examples of such mappings are methods of Information Retrieval (IR), text summarization, sentiment analysis, document classification and clustering, etc. The common point of all tasks and techniques of TM is the selection of word forms and their derivatives used to recognize content in natural language symbol sequences. Considering IR as an example, we examine classic types of search, such as searching for word forms, phrases, patterns and concepts, as well as the augmentation of patterns with syntactic and semantic information. Next, we provide a general description of the NLP toolchain: morphological, syntactic, semantic, and pragmatic analysis. Finally, we end the paper with a comparative analysis of modern TM tools which can be helpful for selecting a suitable TM platform based on the user's needs and skills.
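As a toy illustration of the classic word-form search the survey discusses, the sketch below ranks documents against a query with TF-IDF and cosine similarity; the corpus is invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The central bank raised interest rates.",
    "Analysts recommend buying tech stocks.",
    "Rate hikes weigh on stock markets.",
]
query = ["interest rate decision"]

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(docs)  # TF-IDF vectors for the corpus
query_vec = vec.transform(query)      # same vocabulary for the query

# Rank documents by cosine similarity to the query, best first
ranking = cosine_similarity(query_vec, doc_matrix)[0].argsort()[::-1]
print([docs[i] for i in ranking])
```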
-
Development of and research on an algorithm for distinguishing features in Twitter publications for a classification problem with known markup
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 171-183
Social media posts play an important role in reflecting the state of the financial market, and their analysis is a powerful tool for trading. The article describes the results of a study of the impact of social media activity on the movement of the financial market. The top authoritative influencers are selected, and Twitter posts are used as data. Such texts usually include slang and abbreviations, so methods for preparing the primary text data, including Stanza and regular expressions, are presented. Two approaches to representing a point in time in the format of text data are considered, and the difference between the influence of a single tweet and of a whole package of tweets collected over a certain period of time is investigated. A statistical approach in the form of frequency analysis is also considered, and metrics are introduced that capture the significance of a particular word in identifying the relationship between price changes and Twitter posts. Frequency analysis involves studying the occurrence distributions of various words and bigrams in the text for positive, negative, or general trends. To build the markup, changes in the market are processed into a binary vector using various parameters, thus posing a binary classification task. Parameters for Binance candlesticks are tuned to better describe the movement of the cryptocurrency market, and their variability is also explored in this article. Sentiment is studied using Stanford CoreNLP. The results of the statistical analysis are relevant to feature selection for subsequent binary or multiclass classification tasks. The presented methods of text analysis help increase the accuracy of models designed to solve natural language processing problems by selecting words and improving the quality of vectorization. Such algorithms are often used in automated trading strategies to predict the price of an asset and the trend of its movement.
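A hedged sketch of the frequency analysis described above: compare how often each word occurs in tweet packages preceding upward versus downward price moves. The tweets and their trend labels are invented for illustration.

```python
from collections import Counter

up_tweets = ["btc breakout soon", "bullish on btc"]         # preceded rises
down_tweets = ["btc crash incoming", "selling everything"]  # preceded drops

def word_freq(tweets: list[str]) -> Counter:
    """Count word occurrences over a package of tweets."""
    return Counter(word for t in tweets for word in t.lower().split())

up, down = word_freq(up_tweets), word_freq(down_tweets)
for word in sorted(set(up) | set(down)):
    if up[word] != down[word]:
        print(f"{word!r}: up={up[word]} down={down[word]}")
```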
-
Generating database schema from requirement specification based on natural language processing and large language model
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1703-1713
A Large Language Model (LLM) is an advanced artificial intelligence algorithm that utilizes deep learning methodologies and extensive datasets to process, understand, and generate human-like text. These models are capable of performing various tasks, such as summarization, content creation, translation, and predictive text generation, making them highly versatile in applications involving natural language understanding. Generative AI, often associated with LLMs, specifically focuses on creating new content, particularly text, by leveraging the capabilities of these models. Developers can harness LLMs to automate complex processes, such as extracting relevant information from system requirement documents and translating it into a structured database schema. This capability has the potential to streamline the database design phase, saving significant time and effort while ensuring that the resulting schema aligns closely with the given requirements. By integrating LLM technology with Natural Language Processing (NLP) techniques, the efficiency and accuracy of generating database schemas from textual requirement specifications can be significantly enhanced. The proposed tool will utilize these capabilities to read system requirement specifications, which may be provided as text descriptions or as Entity-Relationship Diagrams (ERDs). It will then analyze the input and automatically generate a relational database schema in the form of SQL commands. This innovation eliminates much of the manual effort involved in database design, reduces human errors, and accelerates development timelines. The aim of this work is to provide a tool that can be invaluable for software developers, database architects, and organizations aiming to optimize their workflow and seamlessly align technical deliverables with business requirements.
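The pipeline the paper describes can be sketched as a single prompt to a chat-completion LLM that returns SQL DDL; the OpenAI client and model name below are our assumptions, and any comparable LLM API would serve.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

spec = (
    "The system stores customers and orders. Each customer has a name and "
    "an email. Each order belongs to one customer and has a date and a total."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system", "content": "Return only SQL CREATE TABLE statements."},
        {"role": "user", "content": spec},
    ],
)
print(response.choices[0].message.content)
```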