Search results for 'labeling':
Articles found: 12
  1. Editor’s note
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1533-1538
  2. Berger A.I., Guda S.A.
    Optimal threshold selection algorithms for multi-label classification: property study
    Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1221-1238

    Multi-label classification models arise in various areas of life, which is explained by the increasing amount of information that requires prompt analysis. One of the mathematical methods for solving this problem is the plug-in approach: at the first stage, a ranking function is built for each class, ordering all objects in some way, and at the second stage optimal thresholds are selected, so that objects on one side of a threshold are assigned to the class and objects on the other side are not. Thresholds are chosen to maximize the target quality measure. The algorithms whose properties are investigated in this article address the second stage of the plug-in approach, the choice of the optimal threshold vector. This step becomes non-trivial if the F-measure of average precision and recall is used as the target quality measure, since it does not allow independent threshold optimization in each class. In problems of extreme multi-label classification the number of classes can reach hundreds of thousands, so the original optimization problem is reduced to searching for a fixed point of a specially introduced transformation $\boldsymbol{V}$, defined on the unit square in the plane of average precision $P$ and recall $R$. Using this transformation, two optimization algorithms are proposed: the F-measure linearization method and the method of $\boldsymbol{V}$ domain analysis. The properties of the algorithms are studied when applied to multi-label classification data sets of various sizes and origins, in particular the dependence of the error on the number of classes, on the F-measure parameter, and on the internal parameters of the methods under study. A peculiarity of both algorithms was found on problems whose domain of $\boldsymbol{V}$ contains large linear boundaries: when the optimal point is located in the vicinity of these boundaries, the errors of both methods do not decrease as the number of classes grows. In this case, the linearization method determines the argument of the optimal point quite accurately, while the method of $\boldsymbol{V}$ domain analysis accurately determines its polar radius.
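
    As an illustration, here is a minimal, hypothetical Python sketch of the linearization idea described above: at the current point (P, R) the F-measure is replaced by its tangent plane, which makes the objective separable over classes, so each threshold can be optimized independently; the averages are then recomputed and the loop iterates toward a fixed point. All function and parameter names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def f_beta(p, r, beta=1.0):
    """F-measure of average precision p and average recall r."""
    return 0.0 if p + r == 0 else (1 + beta**2) * p * r / (beta**2 * p + r)

def pr_at_threshold(scores_k, labels_k, t):
    """Precision and recall of a single class at threshold t."""
    pred = scores_k >= t
    tp = np.sum(pred & (labels_k == 1))
    return tp / max(pred.sum(), 1), tp / max((labels_k == 1).sum(), 1)

def linearization_method(scores, labels, beta=1.0, n_iter=20):
    """Sketch: linearize F at (p, r), optimize each class threshold
    independently, recompute (p, r), repeat until a fixed point."""
    _, n_classes = scores.shape
    p, r = 0.5, 0.5                      # starting point on the unit square
    thresholds = np.zeros(n_classes)
    for _ in range(n_iter):
        denom = (beta**2 * p + r) ** 2
        gp = (1 + beta**2) * r**2 / denom            # dF/dP at (p, r)
        gr = (1 + beta**2) * beta**2 * p**2 / denom  # dF/dR at (p, r)
        ps, rs = np.zeros(n_classes), np.zeros(n_classes)
        for k in range(n_classes):
            cands = np.unique(scores[:, k])
            prs = [pr_at_threshold(scores[:, k], labels[:, k], t) for t in cands]
            best = int(np.argmax([gp * pk + gr * rk for pk, rk in prs]))
            thresholds[k] = cands[best]
            ps[k], rs[k] = prs[best]
        p_new, r_new = ps.mean(), rs.mean()
        if abs(p_new - p) + abs(r_new - r) < 1e-9:   # fixed point reached
            break
        p, r = p_new, r_new
    return thresholds, f_beta(p, r, beta)
```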

  3. Adekotujo A.S., Enikuomehin T., Aribisala B., Mazzara M., Zubair A.F.
    Computational treatment of natural language text for intent detection
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1539-1554

    Intent detection plays a crucial role in task-oriented conversational systems. To understand the user’s goal, the system relies on its intent detector to classify the user’s utterance, which may be expressed in different forms of natural language, into intent classes. However, the efficacy of intent detection systems has been hindered by a lack of data and by the fact that user intent text is typically characterized by short, general sentences and colloquial expressions. The goal of this study is to develop an intent detection model that will accurately classify and detect user intent. The proposed model uses Contextual Semantic Search (CSS) for semantic search, Latent Dirichlet Allocation (LDA) for topic modeling, the Bidirectional Encoder Representations from Transformers (BERT) semantic matching technique, and the combination of LDA and BERT for text classification and detection; similarity scores are computed to compare the three constituent models. The dataset is drawn from the Broad Twitter Corpus (BTC) and comprises various metadata. A pre-processing step was applied to prepare the data for analysis. A sample of 1432 instances was selected out of the 5000 available because manual annotation is required and time-consuming. To compare the performance of the model with existing models, similarity scores, precision, recall, F1 score, and accuracy were computed. The results revealed that LDA-BERT achieved an accuracy of 95.88% for intent detection, BERT an accuracy of 93.84%, and LDA an accuracy of 92.23%, showing that LDA-BERT performs better than the other models. It is hoped that the novel model will aid in ensuring information security and social media intelligence. For future work, an unsupervised LDA-BERT without any labeled data can be studied with the model.
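
    The LDA-BERT combination suggests a simple fusion of topic proportions and contextual embeddings. Below is a hedged sketch assuming sentence-transformers as the BERT-style encoder; the model name and the fusion-by-concatenation step are illustrative, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sentence_transformers import SentenceTransformer  # assumed dependency

def lda_bert_features(texts, n_topics=10):
    """Concatenate LDA topic proportions with BERT-style sentence embeddings."""
    bow = CountVectorizer(stop_words="english").fit_transform(texts)
    topics = LatentDirichletAllocation(n_components=n_topics,
                                       random_state=0).fit_transform(bow)
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any BERT-like encoder
    embeddings = encoder.encode(texts)
    return np.hstack([topics, embeddings])

# usage: train a toy intent classifier on the fused representation
texts = ["book me a flight to Lagos", "what is the weather today"]
intents = ["book_flight", "get_weather"]
clf = LogisticRegression(max_iter=1000).fit(lda_bert_features(texts, 2), intents)
```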

  4. Ahmad U., Ivanov V.
    Automating high-quality concept banks: leveraging LLMs and multimodal evaluation metrics
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1555-1567

    Interpretability of recent deep learning models has become an epicenter of research, particularly in sensitive domains such as healthcare and finance. Concept bottleneck models have emerged as a promising approach for achieving transparency and interpretability by leveraging a set of human-understandable concepts as an intermediate representation before the prediction layer. However, manual concept annotation is impractical due to the time and effort involved. Our work explores the potential of large language models (LLMs) for generating high-quality concept banks and proposes a multimodal evaluation metric to assess the quality of generated concepts. We investigate three key research questions: the ability of LLMs to generate concept banks comparable to existing knowledge bases like ConceptNet, the sufficiency of unimodal text-based semantic similarity for evaluating concept-class label associations, and the effectiveness of multimodal information in quantifying concept generation quality compared to unimodal concept-label semantic similarity. Our findings reveal that multimodal models outperform unimodal approaches in capturing concept-class label similarity. Furthermore, our generated concepts for the CIFAR-10 and CIFAR-100 datasets surpass those obtained from ConceptNet and the baseline, demonstrating the standalone capability of LLMs in generating high-quality concepts. Being able to automatically generate and evaluate high-quality concepts will enable researchers to quickly adapt to a new dataset with little to no effort before feeding it into concept bottleneck models.
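
    A multimodal check of concept quality can be sketched with CLIP, which scores the similarity between a concept phrase and a class image in a shared embedding space. This is a minimal illustration assuming the Hugging Face transformers CLIP checkpoint; the image path and concept list are placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def concept_image_scores(concepts, image_path):
    """Relative similarity between candidate concepts and one class image."""
    inputs = processor(text=concepts, images=Image.open(image_path),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_image holds the scaled image-to-text cosine similarities
    return out.logits_per_image.softmax(dim=-1).squeeze(0)

# placeholder usage for a CIFAR-10 "airplane" class image
scores = concept_image_scores(
    ["has wings", "has fur", "flies in the sky"], "airplane_sample.jpg")
```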

  5. Polezhaev V.A.
    Automated citation graph building from a corpora of scientific documents
    Computer Research and Modeling, 2012, v. 4, no. 4, pp. 707-719

    In this paper the problem of automated building of a citation graph from a collection of scientific documents is considered as a sequence of machine learning tasks. The overall data processing technology is described; it consists of six stages: preprocessing, metainformation extraction, bibliography list extraction, splitting bibliography lists into separate bibliography records, standardization of each bibliography record, and record linkage. The goal of this paper is to survey approaches and algorithms suitable for each stage, motivate the choice of the best combination of algorithms, and adapt some of them for multilingual bibliography processing. For some of the tasks, new algorithms and heuristics are proposed and evaluated on a mixed corpus of English and Russian documents.
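
    The final record-linkage stage can be illustrated with a tiny, language-agnostic sketch: records are standardized and then linked by string similarity. The normalization rules and the 0.85 cutoff are assumptions for illustration, not the algorithms evaluated in the paper.

```python
import re
from difflib import SequenceMatcher

def normalize_record(record: str) -> str:
    """Crude standardization: lowercase, drop punctuation, collapse spaces."""
    record = re.sub(r"[^\w\s]", " ", record.lower(), flags=re.UNICODE)
    return re.sub(r"\s+", " ", record).strip()

def same_work(rec_a: str, rec_b: str, threshold: float = 0.85) -> bool:
    """Link two bibliography records if their normalized forms are similar."""
    return SequenceMatcher(None, normalize_record(rec_a),
                           normalize_record(rec_b)).ratio() >= threshold

# works for Latin and Cyrillic records alike, since \w keeps both alphabets
print(same_work("Polezhaev V.A. Automated citation graph building, 2012.",
                "POLEZHAEV, V. A.: Automated Citation Graph Building (2012)"))
```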

  6. Nebaba S.G., Markov N.G.
    Convolutional neural networks of YOLO family for mobile computer vision systems
    Computer Research and Modeling, 2024, v. 16, no. 3, pp. 615-631

    The work analyzes known classes of convolutional neural network models and selects from them promising models for detecting flying objects in images. Object detection here refers to the detection, localization in space, and classification of flying objects. The work conducts a comprehensive study of the selected convolutional neural network models in order to identify the most effective ones for creating mobile real-time computer vision systems. It is shown that the most suitable models for detecting flying objects in images, taking into account the formulated requirements for mobile real-time computer vision systems, are models of the YOLO family, of which five should be considered: YOLOv4, YOLOv4-Tiny, YOLOv4-CSP, YOLOv7, and YOLOv7-Tiny. An appropriate dataset has been developed for training, validation, and comprehensive study of these models. Each labeled image of the dataset includes one or more flying objects of four classes: “bird”, “aircraft-type unmanned aerial vehicle”, “helicopter-type unmanned aerial vehicle”, and “unknown object” (objects in airspace not included in the first three classes). Research has shown that all the models exceed the specified threshold for object detection speed; however, only the YOLOv4-CSP and YOLOv7 models partially satisfy the requirements on the accuracy of flying object detection. The most difficult class to detect is “bird”. The most effective model is YOLOv7, with YOLOv4-CSP in second place. Both models are recommended for use as part of a mobile real-time computer vision system, provided they are additionally trained on a larger number of images with “bird” objects so that they satisfy the accuracy requirement for all four classes.
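
    For context, a YOLOv4-family detector of this kind can be run through OpenCV’s dnn module. The sketch below assumes Darknet config/weight files fine-tuned on the four classes from the article’s dataset; the file names are placeholders.

```python
import cv2

# assumed file names; a real model would be fine-tuned on the four classes
net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

CLASSES = ["bird", "aircraft-type UAV", "helicopter-type UAV", "unknown object"]

frame = cv2.imread("sky_frame.jpg")
class_ids, confidences, boxes = model.detect(frame, confThreshold=0.4,
                                             nmsThreshold=0.4)
for cid, conf, (x, y, w, h) in zip(class_ids, confidences, boxes):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, f"{CLASSES[int(cid)]}: {conf:.2f}", (x, y - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
```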

  7. Stepkin A.V., Stepkina A.S.
    Algorithm of simple graph exploration by a collective of agents
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 33-45

    The study presented in the paper is devoted to the problem of finite graph exploration using a collective of agents. Finite non-oriented graphs without loops and multiple edges are considered. The collective consists of two agents-researchers, each of which has a finite memory independent of the number of nodes of the studied graph and uses two colors (three colors in the aggregate), and one agent-experimenter, which has a finite but unboundedly growing internal memory. Agents-researchers can simultaneously traverse the graph, read and change labels of graph elements, and transmit the necessary information to the third agent, the agent-experimenter. The agent-experimenter is a stationary agent in whose memory the result of the functioning of the agents-researchers at each step is recorded and in which a representation of the explored graph (initially unknown to the agents) is gradually built as a list of edges and a list of nodes.

    The work includes a detailed description of the operating modes of the agents-researchers, with an indication of the priority of their activation. The commands exchanged between the agents-researchers and the agent-experimenter during the execution of procedures are considered. Problematic situations arising in the work of the agents-researchers are also studied in detail, for example, coloring a white node when two agents arrive at the same node simultaneously, or marking and examining isthmuses (edges connecting subgraphs explored by different agents-researchers). The full algorithm of the agent-experimenter is presented with a detailed description of the processing of messages received from the agents-researchers, on the basis of which a representation of the studied graph is built. In addition, a complete analysis of the time, space, and communication complexity of the constructed algorithm is performed.

    The presented graph exploration algorithm has a quadratic (with respect to the number of nodes of the studied graph) time complexity, quadratic space complexity, and quadratic communication complexity. The graph exploration algorithm is based on the depth-first traversal method.
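
    Since the algorithm is built on depth-first traversal, a single-agent toy version conveys the core bookkeeping: nodes and edges are discovered through a local neighborhood oracle and accumulated into the node and edge lists that the agent-experimenter maintains in the article’s setting. This is a simplified sketch, not the two-agent protocol itself.

```python
def explore(start, neighbors):
    """Depth-first exploration of an unknown undirected graph, recording
    the discovered node list and edge list."""
    nodes, edges, stack = {start}, set(), [start]
    while stack:
        v = stack.pop()
        for u in neighbors(v):
            edges.add(frozenset((v, u)))   # undirected edge, stored once
            if u not in nodes:
                nodes.add(u)
                stack.append(u)
    return nodes, edges

# usage on a small triangle graph given as an adjacency oracle
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
nodes, edges = explore(1, lambda v: adj[v])
print(nodes, edges)  # {1, 2, 3} and the three edges
```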

  8. Kochergin A.V., Kholmatova Z.Sh.
    Extraction of characters and events from narratives
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1593-1600

    Events and character extraction from narratives is a fundamental task in text analysis. The application of event extraction techniques ranges from the summarization of different documents to the analysis of medical notes. We identify events based on a framework named “four W” (Who, What, When, Where) to capture all the essential components: the actors, actions, times, and places. In this paper, we explore two prominent techniques for event extraction: statistical parsing of syntactic trees and semantic role labeling. While these techniques have previously been investigated by different researchers in isolation, we directly compare their performance on our custom dataset, which we have annotated.
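
    A rough feel for “four W” extraction can be given with spaCy’s dependency and entity annotations: the grammatical subject answers Who, the root verb What, and DATE/TIME and location entities When and Where. This is a hedged toy sketch, not either of the two techniques compared in the paper.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model is installed

def four_w(sentence: str) -> dict:
    """Very rough Who/What/When/Where extraction from one sentence."""
    doc = nlp(sentence)
    event = {"who": None, "what": None, "when": None, "where": None}
    for token in doc:
        if token.dep_ == "nsubj" and event["who"] is None:
            event["who"] = token.text        # grammatical subject -> Who
        if token.dep_ == "ROOT" and token.pos_ == "VERB":
            event["what"] = token.lemma_     # main verb -> What
    for ent in doc.ents:
        if ent.label_ in ("DATE", "TIME"):
            event["when"] = ent.text
        elif ent.label_ in ("GPE", "LOC", "FAC"):
            event["where"] = ent.text
    return event

print(four_w("Alice met the committee in Moscow on Tuesday."))
```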

    Our analysis shows that statistical parsing of syntactic trees outperforms semantic role labeling in event and character extraction, especially in identifying specific details. Nevertheless, semantic role labeling demonstrates good performance in correct actor identification. We evaluate the effectiveness of both approaches by comparing metrics such as precision, recall, and F1 score, thus demonstrating their respective advantages and limitations.

    Moreover, as part of our work, we propose different future applications of event extraction techniques that we plan to investigate. The areas where we want to apply these techniques include code analysis and source code authorship attribution. We consider using event extraction to retrieve key code elements, such as variable assignments and function calls, which can further help us analyze the behavior of programs and identify a project’s contributors. Our work provides novel insights into the performance and efficiency of statistical parsing and semantic role labeling techniques, offering researchers new directions for applying them.

  9. In Russian medicine, two radiopharmaceuticals are currently used for radionuclide therapy of bone metastases: 89Sr-chloride and 153Sm-oxabifor. The first has many side effects, so its use is limited. The second is available only in clinics to which it can be transported quickly. Currently, a third radiopharmaceutical, 188Re-solerene, is undergoing clinical trials. Due to the generator method of obtaining 188Re, this radiopharmaceutical should become available for use in many regions of our country. Therefore, there is a need for a comparative analysis of the characteristics of these radiopharmaceuticals, including on the basis of mathematical modeling.

    The article discusses the features of mathematical modeling of the kinetics of osteotropic radiopharmaceuticals in a human body with bone metastases. Based on a four-compartment model, a complex for modeling and calculating the pharmacokinetic and dosimetric characteristics of radiopharmaceuticals for radionuclide therapy of bone metastases was developed and tested. Using clinical data, the transport constants of the model were identified and the individual characteristics of Russian radiopharmaceuticals labeled with 89Sr, 153Sm, and 188Re were calculated (effective half-lives, maximum activities in the compartments and the times of their achievement, absorbed doses to bone tissue and metastases, the endosteal bone layer, red bone marrow, blood, kidneys, and bladder). Time-activity curves for all compartments of the model are obtained and analyzed. A comparative analysis of the pharmacokinetics and dosimetry of the three radiopharmaceuticals (89Sr-chloride, 153Sm-oxabifor, 188Re-solerene) was carried out.

    From a comparative analysis of the pharmacokinetic and dosimetric characteristics of these radiopharmaceuticals, it follows that, taking into account the generator method of obtaining 188Re in a hospital, 188Re-solerene is the best candidate for widespread use in many regions of the country.
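
    The kinetics behind such a four-compartment model form a linear ODE system with first-order transport plus radioactive decay, which is straightforward to integrate numerically. The sketch below uses an illustrative compartment layout and transport constants; all numeric values are placeholders, not the constants identified in the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# hypothetical layout: 0 blood, 1 bone, 2 metastases, 3 excretion;
# k[(i, j)] is the transport constant from compartment i to j (1/h)
lam = np.log(2) / 17.0                 # decay constant, e.g. ~17 h half-life
k = {(0, 1): 0.25, (1, 0): 0.02,       # blood <-> bone
     (0, 2): 0.10, (2, 0): 0.01,       # blood <-> metastases
     (0, 3): 0.30}                     # blood -> excretion

def rhs(t, a):
    """dA_i/dt = inflow - outflow - radioactive decay (linear system)."""
    da = -lam * a
    for (i, j), kij in k.items():
        da[i] -= kij * a[i]
        da[j] += kij * a[i]
    return da

sol = solve_ivp(rhs, (0.0, 72.0), y0=[1.0, 0.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 72.0, 200)
activity = sol.sol(t)  # time-activity curves for all four compartments
```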

  10. Krasnov F.V., Smaznevich I.S., Baskakova E.N.
    Bibliographic link prediction using contrast resampling technique
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1317-1336

    The paper studies the problem of searching for fragments with missing bibliographic links in a scientific article using automatic binary classification. To train the model, we propose a new contrast resampling technique, whose innovation is the consideration of the context of the link, taking into account the boundaries of the fragment that most affects the probability that a bibliographic link is present in it. The training set was formed of automatically labeled samples: fragments of three sentences with class labels “without link” and “with link” that satisfy the requirement of contrast, i.e., samples of different classes are distanced from each other in the source text. The feature space was built automatically from term occurrence statistics and expanded with additional features: entities (names, numbers, quotes, and abbreviations) recognized in the text.
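
    The contrast resampling idea can be sketched in a few lines: three-sentence fragments are labeled “with link” when the middle sentence carries a citation marker, and “without link” only when no marker appears within a guard distance, keeping the two classes apart in the source text. The citation regex and the gap size are illustrative assumptions, not the paper’s exact rules.

```python
import re

def contrast_samples(sentences, window=3, min_gap=5):
    """Build contrastive fragments of `window` consecutive sentences."""
    has_link = [bool(re.search(r"\[\d+\]", s)) for s in sentences]
    samples = []
    for i in range(len(sentences) - window + 1):
        fragment = " ".join(sentences[i:i + window])
        center = i + window // 2
        if has_link[center]:
            samples.append((fragment, "with link"))
        elif not any(has_link[max(0, center - min_gap):center + min_gap + 1]):
            samples.append((fragment, "without link"))
    return samples
```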

    A series of experiments was carried out on the archives of the scientific journals “Law enforcement review” (273 articles) and “Journal Infectology” (684 articles). The classification was carried out by Nearest Neighbors, RBF SVM, Random Forest, and Multilayer Perceptron models, with optimal hyperparameters selected for each classifier.

    The experiments confirmed the hypothesis put forward. The highest accuracy was reached by the neural network classifier (95%); it is, however, not as fast as the linear one, which also showed high accuracy with contrast resampling (91–94%). These values are superior to those reported for NER and sentiment analysis on comparable data. The high computational efficiency of the proposed method makes it possible to integrate it into applied systems and to process documents online.
