- Generating database schema from requirement specification based on natural language processing and large language model
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1703-1713
A Large Language Model (LLM) is an advanced artificial intelligence algorithm that utilizes deep learning methodologies and extensive datasets to process, understand, and generate human-like text. These models are capable of performing various tasks, such as summarization, content creation, translation, and predictive text generation, making them highly versatile in applications involving natural language understanding. Generative AI, often associated with LLMs, specifically focuses on creating new content, particularly text, by leveraging the capabilities of these models. Developers can harness LLMs to automate complex processes, such as extracting relevant information from system requirement documents and translating it into a structured database schema. This capability has the potential to streamline the database design phase, saving significant time and effort while ensuring that the resulting schema aligns closely with the given requirements. By integrating LLM technology with Natural Language Processing (NLP) techniques, the efficiency and accuracy of generating database schemas from textual requirement specifications can be significantly enhanced. The proposed tool will utilize these capabilities to read system requirement specifications, which may be provided as text descriptions or as Entity-Relationship Diagrams (ERDs). It will then analyze the input and automatically generate a relational database schema in the form of SQL commands. This innovation eliminates much of the manual effort involved in database design, reduces human errors, and accelerates development timelines. The aim of this work is to provide a tool that can be invaluable for software developers, database architects, and organizations aiming to optimize their workflow and seamlessly align technical deliverables with business requirements.
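As a rough illustration of the pipeline described above, and not the authors' tool, the sketch below prompts an LLM to turn a textual requirement into SQL CREATE TABLE statements. The `openai` client calls are standard, but the model name, the prompt wording, and the example requirement are assumptions made for this sketch.

```python
# Illustrative sketch only: prompting an LLM to draft a relational schema from a
# requirement text. The prompt wording, the model name, and the example spec are
# assumptions, not the tool described in the paper.
from openai import OpenAI  # assumes the `openai` package is installed and OPENAI_API_KEY is set

SYSTEM_PROMPT = (
    "You are a database architect. From the requirement specification given by the user, "
    "produce a relational schema as standard SQL CREATE TABLE statements "
    "with primary and foreign keys. Output SQL only."
)

def requirements_to_schema(requirement_text: str, model: str = "gpt-4o-mini") -> str:
    """Return SQL DDL generated by the LLM for the given requirement text."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": requirement_text},
        ],
        temperature=0,  # deterministic output is preferable for schema generation
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    spec = ("The system stores customers and their orders. Each order belongs to "
            "exactly one customer and contains one or more order lines with a "
            "product reference, quantity and price.")
    print(requirements_to_schema(spec))
```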
- Comparing the effectiveness of computer mass appraisal methods
Computer Research and Modeling, 2015, v. 7, no. 1, pp. 185-196
Location-based models are one of the directions in building CAMA (computer-assisted mass appraisal) systems. When the location of an object is taken into account with spatial autoregressive models, the structure of the model (the type of spatial autocorrelation, the choice of "nearest neighbors") cannot always be determined before the model is built. Moreover, in practice there are situations where methods that take into account the dependence of various coefficients on the object's location are more efficient. In this regard, two issues are important in the area of spatial methods:
– the domains in which the methods are effective;
– the sensitivity of the methods to the choice of the type of spatial model and of the number of nearest neighbors.
This article presents a methodology for assessing the effectiveness of computer-assisted valuation of real estate objects, together with the results of testing it on methods based on the location information of the objects.
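As a hedged illustration of the location effect discussed above, and not the paper's methodology, the sketch below adds the mean price of each object's k nearest neighbours as a feature, a crude stand-in for a spatial autoregressive term; the synthetic data, the variable names, and the use of scikit-learn are assumptions. Varying `k` in such a setup is one simple way to probe the sensitivity to the number of nearest neighbours mentioned above.

```python
# Hedged sketch: a k-nearest-neighbour price feature as a crude substitute for a
# spatial autoregressive term in a mass-appraisal regression. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

def knn_price_feature(coords: np.ndarray, prices: np.ndarray, k: int = 5) -> np.ndarray:
    """Mean observed price of the k nearest neighbours of each object (itself excluded)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(coords)
    _, idx = nn.kneighbors(coords)        # first column is the object itself
    return prices[idx[:, 1:]].mean(axis=1)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))           # object locations
area = rng.uniform(30, 120, size=200)                # one structural attribute
prices = 1000 * area + 5000 * coords[:, 0] + rng.normal(0, 5000, 200)

X = np.column_stack([area, knn_price_feature(coords, prices, k=5)])
model = LinearRegression().fit(X, prices)
print("R^2 with the spatial feature:", round(model.score(X, prices), 3))
```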
- A novel method of stylometry based on the statistic of numerals
Computer Research and Modeling, 2017, v. 9, no. 5, pp. 837-850
A new method of statistical analysis of texts is suggested. The frequency distribution of the first significant digits in numerals of English-language texts is considered. Both cardinal and ordinal numerals, expressed in figures as well as verbally, are taken into account. To isolate the author's own use of numerals, all idiomatic expressions and set phrases that happen to contain numerals, as well as itemizations, page numbers, etc., were first deleted from the text. Benford's law is found to hold approximately for the frequencies of the various first significant digits of compound literary texts by different authors; a marked predominance of the digit 1 is observed. In coherent authorial texts, characteristic deviations from Benford's law arise that are statistically stable and significant authorial peculiarities, which allow one, under certain conditions, to address the problem of authorship and distinguish between texts by different authors. The text should be large enough (at least about 200 kB). At the end of the $\{1, 2, \ldots, 9\}$ digit row, the frequency distribution is subject to strong fluctuations and is thus unrepresentative for our purpose. No theoretical explanation of the observed empirical regularity is attempted, which, however, does not preclude the applicability of the proposed methodology to text attribution. The approach suggested and the conclusions are backed by examples of the computer analysis of works by W. M. Thackeray, M. Twain, R. L. Stevenson, J. Joyce, the Brontë sisters, and J. Austen. On the basis of the technique suggested, we examined the authorship of a text earlier ascribed to L. F. Baum (the result agrees with that obtained by different means). We have shown that the authorship of Harper Lee's "To Kill a Mockingbird" pertains to her, whereas the primary draft, "Go Set a Watchman", seems to have been written in collaboration with Truman Capote. All results are confirmed by the parametric Pearson's chi-squared test as well as the non-parametric Mann–Whitney U test and Kruskal–Wallis test.
Keywords: text attribution, first significant digit of numerals.
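A minimal sketch of the first-significant-digit statistic follows. It extracts only numerals written in figures and applies Pearson's chi-squared test against Benford's proportions; the filtering of idioms, verbal numerals, and page numbers described above is omitted, and the sample text is an invented placeholder (a real analysis needs texts of at least about 200 kB).

```python
# Simplified sketch of the first-significant-digit statistic: only numerals
# written in figures are counted; the paper's preprocessing (verbal numerals,
# removal of idioms, itemizations, page numbers) is omitted here.
import re
from collections import Counter
import numpy as np
from scipy.stats import chisquare

BENFORD = np.array([np.log10(1 + 1 / d) for d in range(1, 10)])  # Benford proportions for 1..9

def first_digit_counts(text: str) -> np.ndarray:
    """Counts of first significant digits 1..9 over all numbers written in figures."""
    counts = Counter()
    for token in re.findall(r"\d+", text):
        first = token.lstrip("0")[:1]      # drop leading zeros
        if first:
            counts[int(first)] += 1
    return np.array([counts[d] for d in range(1, 10)])

def compare_with_benford(text: str):
    obs = first_digit_counts(text)
    exp = BENFORD * obs.sum()              # expected counts under Benford's law
    return chisquare(f_obs=obs, f_exp=exp)

sample = "In 1866 the ship carried 120 men, 35 guns and 7 boats over 2000 miles."
print(compare_with_benford(sample))        # meaningful only for much larger texts
```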
- Methods of evaluating the effectiveness of systems for computing resources monitoring
Computer Research and Modeling, 2012, v. 4, no. 3, pp. 661-668
This article discusses the contribution of a computing resources monitoring system to the work of a distributed computing system. A method for evaluating this contribution and the performance of the monitoring system, based on measures of certainty about the state of the controlled system, is proposed. The application of this methodology to the design and development of local monitoring for the Central Information and Computing Complex of the Joint Institute for Nuclear Research is described.
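The abstract does not give the certainty measure itself; as one plausible and purely illustrative reading, the sketch below scores certainty about the monitored system's state as one minus the normalized Shannon entropy of the estimated state distribution. The function name and the example distributions are assumptions.

```python
# Purely illustrative: a normalized-entropy certainty score for the state of a
# monitored system. This is an assumed interpretation, not the paper's formula.
import numpy as np

def state_certainty(state_probs) -> float:
    """1.0 = state known exactly, 0.0 = complete uncertainty (uniform distribution)."""
    p = np.asarray(state_probs, dtype=float)
    p = p[p > 0] / p.sum()
    entropy = -(p * np.log(p)).sum()
    max_entropy = np.log(len(state_probs))
    return 1.0 - entropy / max_entropy if max_entropy > 0 else 1.0

# With monitoring the state distribution is concentrated: high certainty (about 0.9).
print(state_certainty([0.98, 0.01, 0.01]))
# Without monitoring the state must be guessed: certainty close to 0.
print(state_certainty([0.4, 0.3, 0.3]))
```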
- Schools on mathematical biology 1973–1992
Computer Research and Modeling, 2016, v. 8, no. 2, pp. 411-422
This is a brief review of the subjects, and an impression of some of the talks, given at the Schools on modelling complex biological systems. Those Schools reflected a logical progression of this way of thinking in our country and provided a place for collective "brain-storming" inspired by prominent scientists of the last century, such as A. A. Lyapunov, N. V. Timofeeff-Ressovsky, and A. M. Molchanov. At the Schools, general issues of the methodology of mathematical modeling in biology and ecology were raised in the form of heated debates, and the fundamental principles of how the structure of matter is organized and how complex biological systems function and evolve were discussed. The Schools served as an important example of interdisciplinary interaction among scientists of distinct perceptions of the World, of distinct approaches and modes of reaching the boundaries of the Unknown, rather than merely of different specializations. What brought together the mathematicians and biologists attending the Schools was the common understanding that the alliance should be fruitful. Reported in the issues of the School proceedings, the presentations, discussions, and reflections have not lost their relevance and might serve as guidance for a new generation of scientists.
- Changepoint detection on financial data using deep learning approach
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 555-575
The purpose of this study is to develop a methodology for detecting change points in time series, including financial data. The theoretical basis of the study rests on research devoted to the analysis of structural changes in financial markets, descriptions of the proposed algorithms for detecting change points, and the peculiarities of building classical and deep machine learning models for this type of problem. The development of such tools is of interest to investors and other stakeholders, providing them with additional approaches to the effective analysis of financial markets and the interpretation of available data.
To address the research objective, a neural network was trained. In the course of the study, several ways of forming the training sample were considered, differing in the nature of their statistical parameters. In order to improve the quality of training and obtain more accurate results, a feature-generation methodology was developed to form the features that serve as input data for the neural network. These features, in turn, were derived from an analysis of the mathematical expectations and standard deviations of the time series data over specific intervals. The potential for combining these features to achieve more stable results is also investigated.
The results of model experiments were analyzed to compare the effectiveness of the proposed model with other existing changepoint detection algorithms that have gained widespread usage in practical applications. A specially generated dataset, developed using proprietary methods, was utilized as both training and testing data. Furthermore, the model, trained on various features, was tested on daily data from the S&P 500 index to assess its effectiveness in a real financial context.
Alongside the description of the principles of the model's operation, possibilities for its further improvement are considered, including modernization of the proposed model's structure, optimization of training data generation, and feature formation. Additionally, the authors set themselves the task of advancing existing approaches toward real-time changepoint detection.
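A minimal sketch of the feature idea described above: rolling means and standard deviations over several window lengths feed a small classifier that separates the regimes around a change point. The window lengths, the synthetic series, the crude labelling, and the use of scikit-learn's MLPClassifier in place of the authors' network are all assumptions.

```python
# Hedged sketch: rolling-window means and standard deviations as input features
# for a small neural classifier; the architecture and labels are placeholders.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier

def rolling_features(series: pd.Series, windows=(10, 20, 50)) -> pd.DataFrame:
    """Rolling mean and standard deviation over several window lengths."""
    feats = {}
    for w in windows:
        feats[f"mean_{w}"] = series.rolling(w).mean()
        feats[f"std_{w}"] = series.rolling(w).std()
    return pd.DataFrame(feats).dropna()

# Toy series with a single change in mean at t = 500.
rng = np.random.default_rng(1)
x = pd.Series(np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)]))

X = rolling_features(x)
y = (X.index >= 500).astype(int)           # crude labels: before/after the change point

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X.values, y)
print("training accuracy:", round(clf.score(X.values, y), 3))
```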
- Optimal control of bank investment as a factor of economic stability
Computer Research and Modeling, 2012, v. 4, no. 4, pp. 959-967
This paper presents a model of replenishment of bank liquidity from banks' additional income. A methodological basis is given for the necessity of bank stabilization funds to cover losses during an economic crisis. An econometric derivation of the equations describing the bank's financial and operating activity is performed. In accordance with the purpose of creating a stabilization fund, an optimality criterion for the controls used is introduced. Based on the equations of the bank's behavior, a vector of optimal controls is derived by the method of dynamic programming.
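The abstract does not state the model's equations, so the sketch below only illustrates the general mechanism: backward dynamic programming (a finite-horizon Riccati recursion) applied to a placeholder linear-quadratic system to obtain a sequence of optimal feedback controls. The matrices are arbitrary and are not the bank model.

```python
# Illustrative only: backward dynamic programming for a generic finite-horizon
# linear-quadratic problem x_{t+1} = A x_t + B u_t with stage cost x'Qx + u'Ru.
# The matrices below are placeholders, not the bank model from the paper.
import numpy as np

def lqr_feedback_gains(A, B, Q, R, horizon):
    """Riccati recursion; the optimal control at step t is u_t = -K_t x_t."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]                     # ordered for t = 0 .. horizon-1

A = np.array([[1.01, 0.02], [0.00, 0.98]])   # placeholder state dynamics
B = np.array([[0.10], [0.05]])               # placeholder control channel
Q = np.eye(2)
R = np.array([[1.0]])

for t, K in enumerate(lqr_feedback_gains(A, B, Q, R, horizon=3)):
    print(f"t={t}, feedback gain K={K.ravel()}")
```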
- Aspects of methodology of ensuring interoperability in the Grid environment and cloud computing
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 675-682
A technique for ensuring interoperability of Grid systems and cloud computing systems is presented. The technique is built on a unified approach to ensuring interoperability for a wide class of systems, proposed by the authors and recorded in a national standard of the Russian Federation.
Keywords: interoperability, grid, grid-environment, cloud computing, clouds, methodology, standardization.