Search results for 'optimization tasks':
Articles found: 38
  1. Salem N., Al-Tarawneh K., Hudaib A., Salem H., Tareef A., Salloum H., Mazzara M.
    Generating database schema from requirement specification based on natural language processing and large language model
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1703-1713

    A Large Language Model (LLM) is an advanced artificial intelligence algorithm that utilizes deep learning methodologies and extensive datasets to process, understand, and generate human-like text. These models are capable of performing various tasks, such as summarization, content creation, translation, and predictive text generation, making them highly versatile in applications involving natural language understanding. Generative AI, often associated with LLMs, specifically focuses on creating new content, particularly text, by leveraging the capabilities of these models. Developers can harness LLMs to automate complex processes, such as extracting relevant information from system requirement documents and translating them into a structured database schema. This capability has the potential to streamline the database design phase, saving significant time and effort while ensuring that the resulting schema aligns closely with the given requirements. By integrating LLM technology with Natural Language Processing (NLP) techniques, the efficiency and accuracy of generating database schemas based on textual requirement specifications can be significantly enhanced. The proposed tool will utilize these capabilities to read system requirement specifications, which may be provided as text descriptions or as Entity-Relationship Diagrams (ERDs). It will then analyze the input and automatically generate a relational database schema in the form of SQL commands. This innovation eliminates much of the manual effort involved in database design, reduces human errors, and accelerates development timelines. The aim of this work is to provide a tool that can be invaluable for software developers, database architects, and organizations aiming to optimize their workflows and align technical deliverables with business requirements seamlessly.
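
    As a concrete illustration of the requirements-to-schema step this abstract describes, here is a minimal sketch that sends a requirement text to an LLM and asks for SQL DDL in return. The prompt wording, model name, and helper function are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: requirement text -> SQL schema via an LLM (hypothetical
# pipeline; prompt, model name, and helper are illustrative assumptions).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_schema(requirements: str) -> str:
    """Ask the model to translate a textual requirement spec into SQL DDL."""
    prompt = (
        "Extract the entities, attributes, and relationships from the system "
        "requirement specification below and emit a relational schema as "
        "standard SQL CREATE TABLE statements with primary and foreign keys.\n\n"
        + requirements
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # deterministic output for reproducibility
    )
    return resp.choices[0].message.content

print(generate_schema("A library lends books to members; each loan has a due date."))
```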

  2. Shumov V.V., Korepanov V.O.
    Mathematical models of combat and military operations
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 217-242

    Simulation of combat and military operations is a most important scientific and practical task aimed at providing commanders with quantitative grounds for decision-making. The first models of combat were developed during the First World War (M. Osipov, F. Lanchester), and they are now widely used in connection with the massive introduction of automation tools. At the same time, models of combat and war do not fully take into account the moral potential of the parties to the conflict, which motivates the further development of such models. A probabilistic model of combat is considered in which the combat superiority parameter is determined through a morale parameter (the ratio of the percentages of losses sustained by the parties) and a technological superiority parameter. To assess the latter, the following are taken into account: command experience (the ability to organize coordinated actions), the reconnaissance, fire, and maneuver capabilities of the parties, and their operational (combat) support capabilities. A game-theoretic offense-defense model has been developed that takes into account the actions of the first and second echelons (reserves) of the parties. The attackers' objective function in the model is the product of the probability that the first echelon breaks through one of the defense points and the probability that the second echelon repels a counterattack by the defenders' reserve. A particular problem of managing the breakthrough of defense points is solved, and the optimal distribution of combat units between the echelons is found. The share of troops allocated by the parties to the second echelon (reserve) increases with the value of the aggregate combat superiority parameter of the attackers and decreases with the value of the combat superiority parameter when repelling a counterattack. When planning a battle (engagement, operation) and distributing troops between echelons, it is important to know not the exact number of enemy troops but their combat capabilities, as well as the degree of preparedness of the defense, which does not contradict the experience of warfare. Depending on the situation, the goal of an offensive may be to defeat the enemy, to quickly capture an important area in the depth of the enemy’s defense, to minimize one's own losses, etc. To scale the offense-defense model to different goals, the dependencies of the losses and the rate of advance on the initial ratio of the combat potentials of the parties were found. The influence of social costs on the course and outcome of wars is taken into account. A theoretical explanation is given for a loss in a military campaign against a technologically weaker adversary when the goal of the war is unclear to society. To account for the influence of psychological operations and information wars on the moral potential of individuals, a model of social and information influence is used.
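
    To make the allocation problem concrete, the toy sketch below maximizes the attackers' objective described above: the product of the breakthrough probability and the counterattack-repulsion probability, as a function of the share of forces sent to the second echelon. The logistic contest functions and all numbers are illustrative assumptions, not the authors' model.

```python
# Toy sketch of the offense-defense allocation: the attacker splits forces
# between echelons and maximizes P(breakthrough) * P(repel counterattack).
# The functional forms and parameters below are illustrative assumptions.
import numpy as np

def p_breakthrough(first_echelon, defense_strength, superiority=1.2):
    ratio = superiority * first_echelon / defense_strength
    return ratio / (1.0 + ratio)          # toy contest success function

def p_repel(second_echelon, enemy_reserve, superiority=0.9):
    ratio = superiority * second_echelon / enemy_reserve
    return ratio / (1.0 + ratio)

total, defense, reserve = 100.0, 60.0, 40.0
shares = np.linspace(0.01, 0.99, 99)      # share sent to the second echelon
objective = [p_breakthrough((1 - s) * total, defense) *
             p_repel(s * total, reserve) for s in shares]
best = shares[int(np.argmax(objective))]
print(f"optimal second-echelon share ~ {best:.2f}")
```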

  3. Savchuk O.S., Alkousa M.S., Stonyakin F.S.
    On some mirror descent methods for strongly convex programming problems with Lipschitz functional constraints
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1727-1746

    The paper is devoted to an approach to constructing subgradient methods for strongly convex programming problems with several functional constraints. More precisely, a strongly convex minimization problem with several strongly convex (inequality-type) constraints is considered, and first-order optimization methods for this class of problems are proposed. A special feature of the proposed methods is the possibility of using the strong convexity parameters of the violated functional constraints at nonproductive iterations in the theoretical estimates of the quality of the solution produced by the methods. The main goal is to propose a subgradient method with adaptive rules for selecting step sizes and for stopping. The key idea of the proposed methods is to combine two approaches: a scheme that switches between productive and nonproductive steps, and recently proposed modifications of mirror descent for convex programming problems that allow some of the functional constraints to be ignored on nonproductive steps of the algorithms. The paper describes a subgradient method with switching between productive and nonproductive steps for strongly convex programming problems in the case where the objective function and functional constraints satisfy the Lipschitz condition. An analog of the proposed subgradient method, a mirror descent scheme for problems with relatively Lipschitz and relatively strongly convex objective functions and constraints, is also considered. For the proposed methods, theoretical estimates of the quality of the solution are obtained; they indicate the optimality of these methods from the point of view of lower oracle bounds. In addition, since in many problems the operation of finding an exact subgradient vector is quite expensive, analogs of the above methods were investigated for the class of problems under consideration in which the usual subgradient of the objective function or functional constraints is replaced by a $\delta$-subgradient. This approach can reduce the computational cost of the method by not requiring the exact value of the subgradient to be available at the current point. It is shown that the quality estimates of the solution change by $O(\delta)$. The results of numerical experiments illustrating the advantages of the proposed methods in comparison with some previously known ones are also presented.
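
    A minimal sketch of the basic switching scheme that the proposed methods build on: productive steps descend along a subgradient of the objective when the constraint is (almost) satisfied, nonproductive steps reduce the constraint violation, and the answer is the best productive iterate. The Euclidean setting, step rule, and toy problem are illustrative assumptions; the paper's methods additionally exploit strong convexity and mirror (prox) geometry.

```python
# Toy sketch of a subgradient method with switching between productive and
# nonproductive steps (Euclidean case; step rule and problem are assumptions).
import numpy as np

def switching_subgradient(f, grad_f, g, grad_g, x0, eps, n_iters=2000):
    x = x0.copy()
    productive = []                          # iterates gathered on productive steps
    for _ in range(n_iters):
        if g(x) <= eps:                      # productive step: descend on f
            h = grad_f(x)
            productive.append(x.copy())
        else:                                # nonproductive: reduce the violation
            h = grad_g(x)
        x = x - (eps / np.dot(h, h)) * h     # adaptive Polyak-type step length
    return min(productive, key=f)            # best productive iterate

# Toy instance: min ||x - a||^2 subject to ||x||_1 - 1 <= 0.
a = np.array([2.0, 1.0])
f = lambda x: float(np.dot(x - a, x - a))
grad_f = lambda x: 2 * (x - a)
g = lambda x: np.abs(x).sum() - 1.0
grad_g = np.sign
print(switching_subgradient(f, grad_f, g, grad_g, np.zeros(2), eps=0.05))
```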

  4. Irkhin I.A., Bulatov V.G., Vorontsov K.V.
    Additive regularization of topic models with fast text vectorization
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1515-1528

    The probabilistic topic model of a text document collection finds two matrices: a matrix of conditional probabilities of topics in documents and a matrix of conditional probabilities of words in topics. Each document is represented by a multiset of words also called the “bag of words”, thus assuming that the order of words is not important for revealing the latent topics of the document. Under this assumption, the problem is reduced to a low-rank non-negative matrix factorization governed by likelihood maximization. In general, this problem is ill-posed having an infinite set of solutions. In order to regularize the solution, a weighted sum of optimization criteria is added to the log-likelihood. When modeling large text collections, storing the first matrix seems to be impractical, since its size is proportional to the number of documents in the collection. At the same time, the topical vector representation (embedding) of documents is necessary for solving many text analysis tasks, such as information retrieval, clustering, classification, and summarization of texts. In practice, the topical embedding is calculated for a document “on-the-fly”, which may require dozens of iterations over all the words of the document. In this paper, we propose a way to calculate a topical embedding quickly, by one pass over document words. For this, an additional constraint is introduced into the model in the form of an equation, which calculates the first matrix from the second one in linear time. Although formally this constraint is not an optimization criterion, in fact it plays the role of a regularizer and can be used in combination with other regularizers within the additive regularization framework ARTM. Experiments on three text collections have shown that the proposed method improves the model in terms of sparseness, difference, logLift and coherence measures of topic quality. The open source libraries BigARTM and TopicNet were used for the experiments.
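
    A sketch of what such a one-pass topical embedding might look like: the document vector is obtained from the word-topic matrix in a single sweep over the document's word counts. The specific linear rule below (row-normalizing the word-topic matrix, i.e. a Bayes inversion with a uniform topic prior) is an illustrative assumption, not necessarily the constraint used in the paper.

```python
# Sketch of a one-pass topical embedding: theta_d is computed from the
# word-topic matrix phi in a single sweep over the document's words.
# The exact linear rule is an illustrative assumption.
import numpy as np

def one_pass_embedding(doc_word_counts, phi):
    """doc_word_counts: (W,) bag-of-words counts; phi: (W, T) with p(w|t) columns."""
    p_t_given_w = phi / phi.sum(axis=1, keepdims=True)   # renormalize rows
    theta = p_t_given_w.T @ doc_word_counts              # one linear pass
    return theta / theta.sum()                           # normalize to a distribution

W, T = 1000, 20
rng = np.random.default_rng(0)
phi = rng.dirichlet(np.ones(W), size=T).T                # (W, T), columns sum to 1
doc = rng.integers(0, 3, size=W)                         # synthetic word counts
print(one_pass_embedding(doc, phi))
```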

  5. Ketova K.V., Kasatkina E.V.
    The solution of the logistics task of fuel supply for the regional distributed heat supply system
    Computer Research and Modeling, 2012, v. 4, no. 2, pp. 451-470

    A technique is proposed for solving the logistics task of fuel supply in a region, including the interconnected tasks of routing, clustering, optimal resource distribution, and stock control. The calculations are carried out on the example of the fuel supply system of the Udmurt Republic.

    Views (last year): 1. Citations: 6 (RSCI).
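
    As a rough illustration of two of the interconnected subtasks named above, the sketch below clusters consumers around fuel bases and then builds a delivery route within each cluster with a nearest-neighbor heuristic. The data and both heuristics are assumptions for illustration only and do not reproduce the authors' technique.

```python
# Illustrative sketch: cluster consumers around fuel bases, then build a
# delivery route per cluster with a nearest-neighbor heuristic (assumptions).
import numpy as np

rng = np.random.default_rng(1)
bases = rng.uniform(0, 100, size=(4, 2))         # fuel base coordinates
consumers = rng.uniform(0, 100, size=(40, 2))    # consumer coordinates

# Clustering step: assign each consumer to its nearest base.
dists = np.linalg.norm(consumers[:, None, :] - bases[None, :, :], axis=2)
labels = dists.argmin(axis=1)

def nearest_neighbor_route(depot, points):
    """Greedy routing: always drive to the closest unvisited consumer."""
    route, remaining = [depot], [tuple(p) for p in points]
    while remaining:
        last = route[-1]
        nxt = min(remaining, key=lambda p: np.hypot(p[0] - last[0], p[1] - last[1]))
        route.append(nxt)
        remaining.remove(nxt)
    return route

for c in range(4):
    route = nearest_neighbor_route(tuple(bases[c]), consumers[labels == c])
    print(f"base {c}: serves {len(route) - 1} consumers")
```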
  6. Moiseev N.A., Nazarova D.I., Semina N.S., Maksimov D.A.
    Changepoint detection on financial data using deep learning approach
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 555-575

    The purpose of this study is to develop a methodology for detecting change points in time series, including financial data. The theoretical basis of the study rests on research devoted to the analysis of structural changes in financial markets, descriptions of the proposed changepoint detection algorithms, and the peculiarities of building classical and deep machine learning models for this type of problem. The development of such tools is of interest to investors and other stakeholders, providing them with additional approaches to the effective analysis of financial markets and interpretation of available data.

    To address the research objective, a neural network was trained. In the course of the study, several ways of forming the training sample were considered, differing in the statistical parameters of the generated data. To improve the quality of training and obtain more accurate results, a feature-generation methodology was developed to form the features that serve as input data for the neural network. These features, in turn, were derived from the mathematical expectations and standard deviations of the time series data over specific intervals. The potential for combining these features to achieve more stable results is also investigated.
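
    The kind of feature generation described here can be sketched as rolling means and standard deviations over several window lengths, stacked as inputs for a changepoint classifier. The window sizes and the synthetic series below are illustrative assumptions.

```python
# Sketch of interval-based feature generation: rolling means and standard
# deviations over several windows (window sizes and data are assumptions).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic series with a change point in the mean at t = 500.
series = pd.Series(np.r_[rng.normal(0, 1, 500), rng.normal(2, 1, 500)])

def make_features(s: pd.Series, windows=(10, 30, 90)) -> pd.DataFrame:
    feats = {}
    for w in windows:
        feats[f"mean_{w}"] = s.rolling(w).mean()
        feats[f"std_{w}"] = s.rolling(w).std()
    return pd.DataFrame(feats).dropna()

X = make_features(series)   # inputs for a changepoint classifier
print(X.tail())
```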

    The results of model experiments were analyzed to compare the effectiveness of the proposed model with other existing changepoint detection algorithms that have gained widespread usage in practical applications. A specially generated dataset, developed using proprietary methods, was utilized as both training and testing data. Furthermore, the model, trained on various features, was tested on daily data from the S&P 500 index to assess its effectiveness in a real financial context.

    As the principles of the model’s operation are described, possibilities for its further improvement are considered, including modernizing the structure of the proposed model, optimizing the generation of training data, and refining feature formation. The authors also set themselves the task of advancing existing concepts for real-time changepoint detection.

  7. Bogdanov A.V., Thurein Kyaw L.
    Query optimization in relational database systems and cloud computing technology
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 649-655

    Optimization is the heart of a relational Database Management System (DBMS). The optimizer analyzes SQL statements and determines the most efficient access plan for satisfying each query request. It parses the SQL statements to identify which tables and columns are referenced, then consults the information and statistical data stored in the system catalog to determine the best method of carrying out the tasks required to satisfy the query requests.

    Views (last year): 1.
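
    The catalog-driven plan selection described in this abstract can be illustrated with a toy cost model: the optimizer compares the estimated cost of a full table scan against an index lookup using statistics from the system catalog. The cost formulas and numbers below are textbook-style assumptions, not a specific DBMS implementation.

```python
# Toy cost-based access-path selection using catalog statistics
# (cost formulas and statistics are textbook-style assumptions).
from dataclasses import dataclass

@dataclass
class TableStats:
    n_rows: int         # table cardinality from the system catalog
    n_pages: int        # pages occupied on disk
    selectivity: float  # estimated fraction of rows matching the predicate

def choose_access_path(stats: TableStats, has_index: bool) -> str:
    scan_cost = stats.n_pages                      # read every page once
    # Simplistic index cost: index traversal plus one page per matching row.
    index_cost = 3 + stats.selectivity * stats.n_rows
    if has_index and index_cost < scan_cost:
        return f"index lookup (cost ~ {index_cost:.0f})"
    return f"full table scan (cost ~ {scan_cost})"

print(choose_access_path(TableStats(1_000_000, 20_000, 0.001), has_index=True))
print(choose_access_path(TableStats(1_000_000, 20_000, 0.3), has_index=True))
```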
  8. Degtyarev A.B., Yezhakova T.R., Khramushin V.N.
    Algorithmic construction of explicit numerical schemes and visualization of objects and processes in the computational experiment in fluid mechanics
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 767-774

    The paper discusses the design and verification stages in the development of complex numerical algorithms for direct computational experiments in fluid mechanics. When modeling physical fields and nonstationary processes of continuum mechanics, it is desirable to rely on strict rules for constructing the numerical objects and the related computational algorithms. The synthesis of adaptive numerical objects and efficient arithmetic-logic operations can serve to optimize the whole computational task, provided the original laws of fluid mechanics are strictly followed. The possibility of using ternary logic makes it possible to resolve some contradictions between functional and declarative programming in the implementation of purely applied problems of mechanics. Such design decisions lead to new numerical schemes of tensor mathematics that help optimize efficiency and validate the correctness of the simulation results. The most important consequence is the possibility of using interactive graphical techniques for visualizing intermediate modeling results, as well as for controlled influence on the course of the computational experiment under the supervision of aerohydrodynamics engineers and researchers.

    Views (last year): 1.
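
    As a minimal example of the explicit numerical schemes the abstract refers to, the sketch below advances the 1D advection equation $u_t + c\,u_x = 0$ with a first-order upwind update, where each new value depends only on already-known values. The grid, CFL number, and initial condition are illustrative assumptions.

```python
# Minimal explicit scheme: first-order upwind for 1D advection u_t + c u_x = 0
# (grid sizes and CFL-respecting time step are illustrative assumptions).
import numpy as np

c, L, nx = 1.0, 1.0, 200
dx = L / nx
dt = 0.5 * dx / c                       # CFL number 0.5 for stability
x = np.linspace(0, L, nx, endpoint=False)
u = np.exp(-200 * (x - 0.25) ** 2)      # initial Gaussian pulse

for _ in range(200):
    # Explicit upwind step: each new value depends only on known old values.
    u = u - c * dt / dx * (u - np.roll(u, 1))   # periodic boundary

print(f"pulse maximum after transport: {u.max():.3f} near x = {x[u.argmax()]:.2f}")
```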