Search results for 'artificial intelligence':
Articles found: 14
  1. Salem N., Al-Tarawneh K., Hudaib A., Salem H., Tareef A., Salloum H., Mazzara M.
    Generating database schema from requirement specification based on natural language processing and large language model
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1703-1713

    A Large Language Model (LLM) is an advanced artificial intelligence algorithm that utilizes deep learning methodologies and extensive datasets to process, understand, and generate human-like text. These models are capable of performing various tasks, such as summarization, content creation, translation, and predictive text generation, making them highly versatile in applications involving natural language understanding. Generative AI, often associated with LLMs, specifically focuses on creating new content, particularly text, by leveraging the capabilities of these models. Developers can harness LLMs to automate complex processes, such as extracting relevant information from system requirement documents and translating it into a structured database schema. This capability has the potential to streamline the database design phase, saving significant time and effort while ensuring that the resulting schema aligns closely with the given requirements. By integrating LLM technology with Natural Language Processing (NLP) techniques, the efficiency and accuracy of generating database schemas from textual requirement specifications can be significantly enhanced. The proposed tool will utilize these capabilities to read system requirement specifications, which may be provided as text descriptions or as Entity-Relationship Diagrams (ERDs). It will then analyze the input and automatically generate a relational database schema in the form of SQL commands. This innovation eliminates much of the manual effort involved in database design, reduces human errors, and accelerates development timelines. The aim of this work is to provide a tool that can be invaluable for software developers, database architects, and organizations aiming to optimize their workflows and align technical deliverables with business requirements seamlessly.
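
    A minimal sketch of the kind of pipeline this abstract describes, assuming an OpenAI-style chat-completions client as the backend; the model name, prompt wording, and sample requirements below are illustrative and are not the authors' actual tool:

      # Sketch: ask an LLM to turn a textual requirement specification
      # into a relational schema as SQL CREATE TABLE statements.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      requirements = (
          "The system stores customers and their orders. Each customer has "
          "a name and an email address; each order belongs to one customer "
          "and has a date and a total amount."
      )

      response = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative model choice
          messages=[{
              "role": "user",
              "content": "Extract the entities, attributes and relationships "
                         "from this requirement specification and output only "
                         "SQL CREATE TABLE statements:\n" + requirements,
          }],
      )
      print(response.choices[0].message.content)  # the generated SQL schema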

  2. Danilov G.V., Zhukov V.V., Kulikov A.S., Makashova E.S., Mitin N.A., Orlov Y.N.
    Comparative analysis of statistical methods of scientific publications classification in medicine
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 921-933

    In this paper, various methods for the machine classification of scientific texts by thematic section are compared on the example of publications in specialized medical journals published by Springer. The corpus of texts covered five sections: pharmacology/toxicology, cardiology, immunology, neurology, and oncology. We considered both classification methods based on the analysis of abstracts and keywords and methods based on processing the full texts. Bayesian classification, support vector machines, and reference letter combinations were applied. It is shown that the most accurate classification method is based on creating a library of standards of letter trigrams that correspond to texts on a given subject. It turned out that, for this corpus, the Bayesian method gives an error of about 20%, the support vector machine an error of about 10%, and the proximity of a text's trigram distribution to the standard of a theme an error of about 5%, which allows ranking these methods for the use of artificial intelligence in the task of classifying texts by industry specialty. It is important that the support vector method provides the same accuracy when analyzing abstracts as when analyzing full texts, which matters for reducing the number of operations on large text corpora.
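
    As an illustration, a minimal sketch of the best-performing method described above: build a standard letter-trigram distribution per topic and assign a new text to the topic whose standard is closest (L1 distance here; the toy training snippets are placeholders, not the paper's corpus):

      # Sketch: letter-trigram "standards" per topic, nearest-standard classification.
      from collections import Counter

      def trigram_dist(text):
          """Relative frequencies of letter trigrams in a text."""
          text = "".join(ch for ch in text.lower() if ch.isalpha() or ch == " ")
          grams = Counter(text[i:i + 3] for i in range(len(text) - 2))
          total = sum(grams.values())
          return {g: n / total for g, n in grams.items()}

      def l1(p, q):
          """L1 distance between two sparse distributions."""
          return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in set(p) | set(q))

      # Per-topic standards built from training texts (toy snippets here).
      standards = {
          "cardiology": trigram_dist("myocardial infarction coronary artery heart rhythm"),
          "oncology": trigram_dist("tumor metastasis chemotherapy carcinoma malignant"),
      }

      def classify(text):
          d = trigram_dist(text)
          return min(standards, key=lambda topic: l1(d, standards[topic]))

      print(classify("the biopsy confirmed a malignant carcinoma"))  # -> oncology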

  3. Guskov V.P., Gushchanskiy D.E., Kulabukhova N.V., Abrahamyan S.A., Balyan S.G., Degtyarev A.B., Bogdanov A.V.
    An interactive tool for developing distributed telemedicine systems
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 521-527

    Getting a qualified medical examination can be difficult for people in remote areas, because the available medical staff may be inaccessible or may lack expert knowledge at the proper level. Telemedicine technologies can help in such situations. On the one hand, such technologies allow highly qualified doctors to consult remotely, thereby improving the quality of diagnosis and treatment planning. On the other hand, computer-aided analysis of examination results, anamnesis, and information on similar cases assists medical staff in their routine activities and decision-making.

    Creating a telemedicine system for a particular domain is a laborious process. It is not sufficient to select the right medical experts and to fill the knowledge base of the analytical module; it is also necessary to organize the entire infrastructure of the system to meet requirements for reliability, fault tolerance, protection of personal data, and so on. Tools with reusable infrastructure elements common to such systems can reduce the amount of work needed to develop telemedicine systems.

    An interactive tool for creating distributed telemedicine systems is described in the article. A list of requirements for such systems is presented, and structural solutions for meeting these requirements are suggested. A composition of such elements applicable to distributed systems is described. A cardiac telemedicine system is described as the foundation of the tool.

    Views (last year): 3. Citations: 4 (RSCI).
  4. Ershov N.M., Popova N.N.
    Natural models of parallel computations
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 781-785

    The course “Natural models of parallel computing”, offered to senior students of the Faculty of Computational Mathematics and Cybernetics, Moscow State University, is devoted to the supercomputer implementation of natural computational models and is, in fact, an introduction to the theory of natural computing, a relatively new branch of science formed at the intersection of mathematics, computer science, and the natural sciences (especially biology). Topics of natural computing include both already classic subjects, such as cellular automata, and relatively new ones introduced in the last 10–20 years, such as swarm intelligence. Despite their biological origin, all these models are widely applied in fields related to computer data processing. Research in the field of natural computing is closely related to the issues and technology of parallel computing. The presentation of the theoretical material of the course is accompanied by a consideration of possible schemes of parallel computing; in the practical part of the course, students are expected to produce a software implementation using MPI technology and to carry out numerical experiments investigating the effectiveness of the chosen schemes of parallel computing. A sketch of such an exercise follows the record below.

    Views (last year): 17. Citations: 2 (RSCI).
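
    As a flavour of the practical part, a minimal sketch of an MPI exercise of the kind described: a one-dimensional cellular automaton (rule 110) whose cells are split across ranks, with single-cell halo exchange between neighbours at each step. This assumes mpi4py and numpy and is not taken from the course materials; run with, e.g., mpiexec -n 4 python ca_mpi.py:

      # Sketch: distributed 1D cellular automaton with periodic boundaries.
      import numpy as np
      from mpi4py import MPI

      RULE, CELLS_PER_RANK, STEPS = 110, 16, 32

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      left, right = (rank - 1) % size, (rank + 1) % size  # ring topology

      cells = np.random.default_rng(rank).integers(0, 2, CELLS_PER_RANK, dtype=np.uint8)

      for _ in range(STEPS):
          # Exchange boundary cells ("halo") with neighbouring ranks.
          halo_right = comm.sendrecv(cells[0], dest=left, source=right)
          halo_left = comm.sendrecv(cells[-1], dest=right, source=left)
          ext = np.concatenate(([halo_left], cells, [halo_right]))
          # New cell value = bit of RULE indexed by the 3-cell neighbourhood.
          idx = 4 * ext[:-2] + 2 * ext[1:-1] + ext[2:]
          cells = ((RULE >> idx) & 1).astype(np.uint8)

      chunks = comm.gather(cells, root=0)
      if rank == 0:
          print("".join("#" if c else "." for c in np.concatenate(chunks)))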