Search results for 'information':
Articles found: 138
  1. Gesture recognition is an urgent challenge in developing human-machine interface systems. We analyzed machine learning methods for gesture classification based on electromyographic muscle signals to identify the most effective one. The methods considered were the naive Bayesian classifier (NBC), logistic regression, decision tree, random forest, gradient boosting, support vector machine (SVM), the $k$-nearest neighbors algorithm, and ensembles (NBC and decision tree, NBC and gradient boosting, gradient boosting and decision tree). Electromyography (EMG) was chosen as the method of obtaining information about gestures. This solution does not require the hand to be in the field of view of a camera and can be used to recognize finger movements. To test the effectiveness of the selected gesture recognition methods, a device was developed for recording the EMG signal, which includes three electrodes and an EMG sensor connected to a microcontroller and a power supply. The following gestures were chosen: a clenched fist, “thumb up”, “Victory”, squeezing the index finger, and waving the hand from right to left. Accuracy, precision, recall, and execution time were used to evaluate the effectiveness of the classifiers. These metrics were calculated for three options for the placement of the EMG electrodes on the forearm. According to the test results, the most effective methods are the $k$-nearest neighbors algorithm, random forest, and the ensemble of NBC and gradient boosting; the average accuracy of the ensemble over the three electrode positions was 81.55%. The electrode position at which the machine learning methods achieve maximum accuracy was also determined: one of the differential electrodes is located at the intersection of the flexor digitorum profundus and the flexor pollicis longus, and the second above the flexor digitorum superficialis.
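To make the evaluation loop above concrete, here is a minimal, stdlib-only Python sketch: a k-nearest neighbors classifier scored with accuracy, precision, and recall on synthetic two-dimensional feature vectors standing in for EMG features. The feature distributions, gesture labels, and all parameters are invented for illustration and are not the paper's data or code.

```python
import math
import random

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k closest training samples."""
    neighbors = sorted(train, key=lambda sample: math.dist(sample[0], query))[:k]
    votes = [label for _, label in neighbors]
    return max(set(votes), key=votes.count)

def scores(y_true, y_pred, positive):
    """Accuracy, plus precision/recall for one class treated as 'positive'."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    # `or 1` guards against division by zero when a class is never predicted.
    return acc, tp / (tp + fp or 1), tp / (tp + fn or 1)

random.seed(0)
# Two synthetic gesture classes, well separated along the first feature axis.
train = [((random.gauss(m, 0.5), random.gauss(0, 0.5)), g)
         for g, m in [("fist", 0.0), ("thumb_up", 3.0)] for _ in range(30)]
test = [((random.gauss(m, 0.5), random.gauss(0, 0.5)), g)
        for g, m in [("fist", 0.0), ("thumb_up", 3.0)] for _ in range(10)]
y_true = [g for _, g in test]
y_pred = [knn_predict(train, x) for x, _ in test]
acc, prec, rec = scores(y_true, y_pred, "fist")
```

The same loop extends to the other classifiers and to per-electrode-position comparisons by swapping out the predictor.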

  2. Segmentation is one of the most challenging tasks in medical image analysis. It separates organ or lesion pixels from the background in medical images such as MRI or CT scans, providing critical information about the volumes and shapes of human organs. Medical imaging is considered one of the most important topics in scientific imaging due to the rapid and continuing progress in computerized medical image visualization, advances in analysis approaches, and computer-aided diagnosis. Digital image processing is becoming increasingly important in healthcare due to the growing use of direct digital imaging systems for medical diagnostics. Thanks to medical imaging techniques, image processing approaches are now applicable in medicine. Generally, various transformations are needed to extract image data. Moreover, a digital image can be considered an approximation of a real scene that includes some uncertainty derived from the constraints of the imaging process, and information on the level of uncertainty will influence an expert's judgment. To address this challenge, we propose a novel framework based on the interval concept, a good tool for dealing with uncertainty. In the proposed approach, medical images are transformed into an interval-valued representation, and entropies are defined for the image object and background. We then determine a threshold for the lower-bound image and for the upper-bound image, and take the mean value for the final output. To demonstrate the effectiveness of the proposed framework, we evaluate it on a synthetic image with its ground truth. Experimental results show how the performance of entropy-based threshold segmentation can be enhanced by the proposed approach to overcome ambiguity.
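One plausible reading of the thresholding scheme above can be sketched in stdlib-only Python with a Kapur-style entropy criterion. The interval half-width `delta`, the toy 12-pixel "image", and the exact entropy definition are assumptions for illustration, not the paper's formulation.

```python
import math

def kapur_threshold(pixels, levels=256):
    """Pick the threshold maximizing background entropy + object entropy."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    prob = [h / total for h in hist]
    best_t, best_h = 0, -1.0
    for t in range(1, levels):
        w0 = sum(prob[:t])          # background mass
        w1 = 1.0 - w0               # object mass
        if w0 == 0 or w1 == 0:
            continue
        h0 = -sum(p / w0 * math.log(p / w0) for p in prob[:t] if p > 0)
        h1 = -sum(p / w1 * math.log(p / w1) for p in prob[t:] if p > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

def interval_threshold(pixels, delta=10):
    """Threshold the lower- and upper-bound images, then average the results."""
    lower = [max(0, v - delta) for v in pixels]
    upper = [min(255, v + delta) for v in pixels]
    return (kapur_threshold(lower) + kapur_threshold(upper)) / 2

# Toy bimodal "image": dark background around 40, bright object around 200.
image = [40, 42, 38, 45, 41, 39, 200, 205, 198, 202, 199, 201]
t = interval_threshold(image)
```

The averaged threshold lands between the two intensity clusters, separating the object from the background on this toy example.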

  3. Musaev A.A., Grigoriev D.A.
    Extracting knowledge from text messages: overview and state-of-the-art
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1291-1315

    In general, solving the information explosion problem can be delegated to systems for automatic processing of digital data. These systems are intended for recognizing, sorting, meaningfully processing and presenting data in formats readable and interpretable by humans. The creation of intelligent knowledge extraction systems that handle unstructured data would be a natural solution in this area. At the same time, the evident progress in these tasks for structured data contrasts with the limited success of unstructured data processing, and, in particular, document processing. Currently, this research area is undergoing active development and investigation. The present paper is a systematic survey of both Russian and international publications dedicated to the leading trend in automatic text data processing: Text Mining (TM). We cover the main tasks and notions of TM, as well as its place in the current AI landscape. Furthermore, we analyze the complications that arise in the natural language processing (NLP) of texts, which are weakly structured and often carry ambiguous linguistic information. We describe the stages of text data preparation, cleaning, and feature selection which, alongside the data obtained via morphological, syntactic, and semantic analysis, constitute the input for the TM process. This process can be represented as mapping a set of text documents to “knowledge”. Using the case of stock trading, we demonstrate the formalization of the problem of making a trade decision based on a set of analytical recommendations. Examples of such mappings are methods of Information Retrieval (IR), text summarization, sentiment analysis, document classification and clustering, etc. The common point of all tasks and techniques of TM is the selection of word forms and their derivatives used to recognize content in natural language symbol sequences.
Considering IR as an example, we examine classic types of search, such as searching for word forms, phrases, patterns and concepts. Additionally, we consider the augmentation of patterns with syntactic and semantic information. Next, we provide a general description of all NLP instruments: morphological, syntactic, semantic and pragmatic analysis. Finally, we end the paper with a comparative analysis of modern TM tools which can be helpful for selecting a suitable TM platform based on the user’s needs and skills.
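The contrast between exact word-form search and pattern search mentioned above can be illustrated with a tiny Python sketch. The documents and the crude "stem plus wildcard" rule are invented, and real TM systems would use proper morphological analysis rather than a regex.

```python
import re

docs = [
    "The trader recommends buying the stock.",
    "Trading volumes fell after the recommendation.",
    "The report contains no trade signals.",
]

def word_form_search(term, documents):
    """Return indices of documents containing the exact word form."""
    pattern = re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
    return [i for i, d in enumerate(documents) if pattern.search(d)]

def pattern_search(stem, documents):
    """Return indices of documents containing any derivative of the stem."""
    pattern = re.compile(r"\b" + re.escape(stem) + r"\w*", re.IGNORECASE)
    return [i for i, d in enumerate(documents) if pattern.search(d)]

exact = word_form_search("trade", docs)   # matches only the exact form "trade"
stemmed = pattern_search("trad", docs)    # also matches "trader", "Trading"
```

Concept search, the richest of the classic types, would additionally map "trade", "deal", and "transaction" to one concept, which a plain pattern cannot do.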

  4. Stonyakin F.S., Savchuk O.S., Baran I.V., Alkousa M.S., Titov A.A.
    Analogues of the relative strong convexity condition for relatively smooth problems and adaptive gradient-type methods
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 413-432

    This paper is devoted to some variants of improving the convergence rate guarantees of gradient-type algorithms for relatively smooth and relatively Lipschitz-continuous problems in the case of additional information about some analogues of the strong convexity of the objective function. We consider two classes of problems, namely, convex problems with a relative functional growth condition, and problems (generally, non-convex) with an analogue of the Polyak – Lojasiewicz gradient dominance condition with respect to the Bregman divergence. For the first type of problems, we propose two restart schemes for gradient-type methods and justify theoretical estimates of the convergence of two algorithms with adaptively chosen parameters corresponding to the relative smoothness or Lipschitz property of the objective function. The first of these algorithms is simpler in terms of its per-iteration stopping criterion, but for this algorithm, near-optimal computational guarantees are justified only on the class of relatively Lipschitz-continuous problems. The restart procedure of the other algorithm, in its turn, allowed us to obtain more universal theoretical results. We proved a near-optimal complexity estimate on the class of convex relatively Lipschitz-continuous problems with a functional growth condition, and we also obtained linear convergence rate guarantees on the class of relatively smooth problems with a functional growth condition. For the class of problems with an analogue of the gradient dominance condition with respect to the Bregman divergence, estimates of the quality of the output solution were obtained using adaptively selected parameters. At the conclusion of the paper, we also present the results of some computational experiments illustrating the performance of the methods for the second approach.
As examples, we considered a linear inverse Poisson problem (minimizing the Kullback – Leibler divergence), its regularized version which allows guaranteeing a relative strong convexity of the objective function, as well as an example of a relatively smooth and relatively strongly convex problem. In particular, calculations show that a relatively strongly convex function may not satisfy the relative variant of the gradient dominance condition.
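For background, the standard notions underlying this setting can be written out as follows (a sketch of textbook definitions; the paper's precise analogues of strong convexity and gradient dominance are more specialized than what is shown here):

```latex
% For a prox-function d, the Bregman divergence is
V_d(y, x) \;=\; d(y) - d(x) - \langle \nabla d(x),\, y - x \rangle .
% A function f is L-relatively smooth with respect to d if, for all x, y,
f(y) \;\le\; f(x) + \langle \nabla f(x),\, y - x \rangle + L \, V_d(y, x),
% which reduces to ordinary L-smoothness when d(x) = \tfrac{1}{2}\|x\|_2^2,
% since then V_d(y, x) = \tfrac{1}{2}\|y - x\|_2^2.
```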

  5. Vorontsova D.V., Isaeva M.V., Menshikov I.A., Orlov K.Y., Bernadotte A.
    Frequency, time, and spatial electroencephalogram changes after COVID-19 during a simple speech task
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 691-701

    We found a predominance of α-rhythm patterns in the left hemisphere in healthy people compared to people with a history of COVID-19. Moreover, we observe a significant decrease in the left hemisphere's contribution to the speech center area in people who have undergone COVID-19 when they perform speech tasks.

    Our findings show that the signal in healthy subjects is more spatially localized and synchronized between hemispheres when performing tasks compared to people who recovered from COVID-19. We also observed a decrease in low frequencies in both hemispheres after COVID-19.

    EEG-patterns of COVID-19 are detectable in an unusual frequency domain. What is usually considered noise in electroencephalographic (EEG) data carries information that can be used to determine whether or not a person has had COVID-19. These patterns can be interpreted as signs of hemispheric desynchronization, premature brain ageing, and more significant brain strain when performing simple tasks compared to people who did not have COVID-19.

    In our work, we have shown the applicability of neural networks in helping to detect the long-term effects of COVID-19 in EEG data. Furthermore, our data, in line with other studies, support the hypothesis that the long-term effects of COVID-19 are severe enough to be detected in the EEG data of an EEG-based BCI. The presented findings on functional brain activity recorded through a brain–computer interface make it possible to use machine learning methods on simple, non-invasive brain–computer interfaces to detect post-COVID syndrome and to advance neurorehabilitation.

  6. Nechaevskiy A.V., Streltsova O.I., Kulikov K.V., Bashashin M.V., Butenko Y.A., Zuev M.I.
    Development of a computational environment for mathematical modeling of superconducting nanostructures with a magnet
    Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1349-1358

    Nowadays the main research activity in the field of nanotechnology is aimed at the creation, study, and application of new materials and new structures. Recently, much attention has been attracted by the possibility of controlling magnetic properties using a superconducting current, as well as by the influence of magnetic dynamics on the current–voltage characteristics of hybrid superconductor/ferromagnet (S/F) nanostructures. In particular, such structures include the S/F/S Josephson junction and molecular nanomagnets coupled to Josephson junctions. Theoretical studies of the dynamics of such structures require solving a large number of coupled nonlinear equations. Numerical modeling of hybrid superconductor/magnet nanostructures involves calculating both the magnetic dynamics and the dynamics of the superconducting phase, which strongly increases its complexity and scale, so it is advisable to use heterogeneous computing systems.

    In the course of studying the physical properties of these objects, it becomes necessary to numerically solve complex systems of nonlinear differential equations, which requires significant time and computational resources.

    The currently existing micromagnetic algorithms and frameworks are based on the finite difference or finite element method and are extremely useful for modeling the dynamics of magnetization over a wide range of time scales. However, the functionality of the existing packages does not allow the desired computation scheme to be fully implemented.

    The aim of the research is to develop a unified environment for modeling hybrid superconductor/magnet nanostructures that provides access to solvers and developed algorithms and is based on a heterogeneous computing paradigm, enabling the study of superconducting elements in nanoscale structures with magnets and hybrid quantum materials. In this paper, we investigate resonant phenomena in a nanomagnet system coupled to a Josephson junction. Such a system has rich resonant physics. To study the possibility of magnetization reversal depending on the model parameters, it is necessary to numerically solve the Cauchy problem for a system of nonlinear equations. For the numerical simulation of hybrid superconductor/magnet nanostructures, a computing environment based on the heterogeneous HybriLIT computing platform was implemented. All computation times reported were averaged over three runs. The results obtained here are of great practical importance and provide the necessary information for evaluating the physical parameters of superconductor/magnet hybrid nanostructures.

  7. Petrov A.P., Podlipskaia O.G., Podlipskii O.K.
    Modeling the dynamics of political positions: network density and the chances of minority
    Computer Research and Modeling, 2024, v. 16, no. 3, pp. 785-796

    In some cases, information warfare results in almost the whole population accepting one of two contesting points of view and rejecting the other. In other cases, however, the “majority party” gets only a small advantage over the “minority party”. The relevant question is which network characteristics of a population enable the minority to maintain significant numbers. Given that some societies are more connected than others, in the sense that they have a higher density of social ties, this question can be specified as follows: how does the density of social ties affect the chances of a minority to maintain significant numbers? Does a higher density contribute to a landslide victory of the majority, or to the resistance of the minority? To address this issue, we consider information warfare between two parties, called the Left and the Right, in a population represented as a network whose nodes are individuals and whose connections correspond to acquaintance and describe mutual influence. At each discrete point in time, each individual decides which party to support based on their attitude, i.e. predisposition to the Left or Right party, and taking into account the influence of their network ties. Influence here means that each tie, with a certain probability, sends a cue to the individual in question in favor of the party that the tie currently supports. If a tie switches party affiliation, it begins to agitate the individual in question for its “new” party. Such processes create dynamics, i.e. the process of changing the partisanship of individuals. The duration of the warfare is set exogenously, with the final time point roughly associated with election day. The described model is numerically implemented on a scale-free network. Numerical experiments have been carried out for various values of network density.
Because of the presence of stochastic elements in the model, 200 runs were conducted for each density value, for each of which the final number of supporters of each of the parties was calculated. It is found that with higher density, the chances increase that the winner will cover almost the entire population. Conversely, low network density contributes to the chances of a minority to maintain significant numbers.
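A toy, stdlib-only Python sketch of these dynamics is given below. It uses an Erdős–Rényi random graph parameterized directly by density rather than the paper's scale-free network, drops the individual attitude term, and all parameters (the 60/40 initial split, cue probability, horizon) are invented for illustration.

```python
import random

def simulate(n=100, density=0.1, steps=30, cue_prob=0.5, seed=1):
    rng = random.Random(seed)
    # Random undirected ties with the given density.
    ties = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < density:
                ties[i].add(j)
                ties[j].add(i)
    # Initial partisanship: 60% Right (+1), 40% Left (-1).
    party = [1 if i < 0.6 * n else -1 for i in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            # Each tie sends a cue for its current party with probability cue_prob.
            cues = sum(party[j] for j in ties[i] if rng.random() < cue_prob)
            # Adopt the side with more cues; keep the current party on a tie.
            new.append(party[i] if cues == 0 else (1 if cues > 0 else -1))
        party = new
    right = sum(p == 1 for p in party)
    return right, n - right

right, left = simulate()
```

Sweeping `density` and averaging over many seeds reproduces the qualitative question studied in the paper: whether the minority retains significant numbers or the winner covers almost the entire population.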

  8. Belyaeva A.V.
    Comparing the effectiveness of computer mass appraisal methods
    Computer Research and Modeling, 2015, v. 7, no. 1, pp. 185-196

    Location-based models are one of the directions in building CAMA (computer-assisted mass appraisal) systems. When the location of an object is taken into account using spatial autoregressive models, the structure of the model (the type of spatial autocorrelation, the choice of “nearest neighbors”) cannot always be determined before it is built. Moreover, in practice there are situations where it is more efficient to apply different rates depending on the type of the object and on its location. In this regard, two important questions arise for spatial methods:

    – the domains in which the methods are effective;

    – the sensitivity of the methods to the choice of the type of spatial model and to the selected number of nearest neighbors.

    This article presents a methodology for assessing the effectiveness of computer mass appraisal of real estate. Results of testing methods based on the location information of the objects are given.

  9. Shumov V.V., Korepanov V.O.
    Mathematical models of combat and military operations
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 217-242

    Simulation of combat and military operations is a most important scientific and practical task aimed at providing the command with quantitative grounds for decision-making. The first models of combat were developed during the First World War (M. Osipov, F. Lanchester), and now they are widely used in connection with the massive introduction of automation tools. At the same time, models of combat and war do not fully take into account the moral potentials of the parties to the conflict, which motivates the further development of such models. A probabilistic model of combat is considered, in which the combat superiority parameter is determined through a morale parameter (the ratio of the percentages of losses sustained by the parties) and a technological superiority parameter. To assess the latter, the following is taken into account: command experience (the ability to organize coordinated actions) and the reconnaissance, fire, maneuver, and operational (combat) support capabilities of the parties. A game-based offense-defense model has been developed, taking into account the actions of the first and second echelons (reserves) of the parties. The objective function of the attackers in the model is the product of the probability that the first echelon breaks through one of the defense points and the probability that the second echelon repels the counterattack of the defenders' reserve. We solved the particular problem of managing the breakthrough of defense points and found the optimal distribution of combat units between the echelons. The share of troops allocated by the parties to the second echelon (reserve) increases with the value of the aggregate combat superiority parameter of the attackers and decreases with the value of the combat superiority parameter when repelling a counterattack.
When planning a battle (engagement, operation) and distributing troops between echelons, it is important to know not the exact number of enemy troops but their capabilities, as well as the degree of preparedness of the defense, which does not contradict the experience of warfare. Depending on the situation, the goal of an offensive may be to defeat the enemy, to quickly capture an important area in the depth of the enemy's defense, to minimize one's own losses, etc. To scale the offense-defense model across these goals, the dependencies of losses and of the rate of advance on the initial ratio of the combat potentials of the parties were found. The influence of social costs on the course and outcome of wars is taken into account. A theoretical explanation is given for a loss in a military campaign against a technologically weaker adversary when the goal of the war is unclear to society. To account for the influence of psychological operations and information warfare on the moral potential of individuals, a model of social and information influence was used.
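For context, the classical Lanchester square law mentioned above (the starting point that models of this kind extend with morale and technological-superiority parameters) can be written as:

```latex
% Force strengths x(t), y(t) with effectiveness coefficients a, b > 0:
\frac{dx}{dt} = -a\, y, \qquad \frac{dy}{dt} = -b\, x,
% from which the quantity
b\, x^{2}(t) - a\, y^{2}(t) = \mathrm{const}
% is conserved along trajectories (the "square law").
```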

  10. Matveev A.V.
    Modeling the kinetics of radiopharmaceuticals with iodine isotopes in nuclear medicine problems
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 883-905

    Radiopharmaceuticals with iodine radioisotopes are now widely used in imaging and non-imaging methods of nuclear medicine. When evaluating the results of radionuclide studies of the structural and functional state of organs and tissues, parallel modeling of the kinetics of radiopharmaceuticals in the body plays an important role. The complexity of such modeling lies in two opposite aspects. On the one hand, excessive simplification of the anatomical and physiological characteristics of the organism when splitting it into compartments may result in the loss or distortion of information important for clinical diagnosis; on the other hand, excessive detail, taking into account all possible interdependencies in the functioning of organs and systems, will, on the contrary, produce a mass of data useless for clinical interpretation or make the mathematical model intractable. Our work develops a unified approach to the construction of mathematical models of the kinetics of radiopharmaceuticals with iodine isotopes in the human body during diagnostic and therapeutic procedures of nuclear medicine. Based on this approach, three- and four-compartment pharmacokinetic models were developed, and corresponding calculation programs were created in the C++ programming language for processing and evaluating the results of radionuclide diagnostics and therapy. Various methods for identifying model parameters based on quantitative data from radionuclide studies of the functional state of vital organs are proposed. The results of pharmacokinetic modeling for radionuclide diagnostics of the liver, kidney, and thyroid using iodine-containing radiopharmaceuticals are presented and analyzed. Using clinical and diagnostic data, individual pharmacokinetic parameters of the transport of different radiopharmaceuticals in the body (transport constants, half-life periods, the maximum activity in an organ and the time of its attainment) were determined.
It is shown that the pharmacokinetic characteristics of each patient are strictly individual and cannot be described by averaged kinetic parameters. Within the framework of the three pharmacokinetic models, “activity–time” relationships were obtained and analyzed for different organs and tissues, including tissues in which the activity of a radiopharmaceutical is impossible or difficult to measure by clinical methods. The features and results of simulation and dosimetric planning of radioiodine therapy of the thyroid gland are also discussed. It is shown that the values of absorbed radiation doses are very sensitive to the kinetic parameters of the compartment model. Therefore, special attention should be paid to obtaining accurate quantitative data from ultrasound and thyroid radiometry and to identifying the simulation parameters based on them. The work is based on the principles and methods of pharmacokinetics. For the numerical solution of the systems of differential equations of the pharmacokinetic models, we used Runge–Kutta methods and the Rosenbrock method. The Hooke–Jeeves method was used to find the minimum of a function of several variables when identifying the model parameters.
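As a minimal illustration of the numerical core described above, the sketch below integrates a hypothetical three-compartment linear model (blood → organ → excreted) with the classical fourth-order Runge–Kutta method. The rate constants are invented, and the paper's actual models, parameter identification, and Rosenbrock solver are not reproduced here.

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Hypothetical transport constants (1/h): blood -> organ, organ -> excreted.
K12, K20 = 0.8, 0.3

def kinetics(t, y):
    blood, organ, excreted = y
    return [-K12 * blood,                # activity leaves the blood
            K12 * blood - K20 * organ,   # accumulates in and leaves the organ
            K20 * organ]                 # accumulates in the excreted pool

# Start with all activity in the blood; integrate 24 h in 0.1 h steps.
t, h, y = 0.0, 0.1, [1.0, 0.0, 0.0]
while t < 24.0 - 1e-9:
    y = rk4_step(kinetics, t, y, h)
    t += h
```

Because the system is linear and closed, the total activity is conserved, which gives a convenient sanity check on the integrator; a stiff variant of the model would instead call for a Rosenbrock-type scheme like the one the paper mentions.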


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

