Search results for 'map':
Articles found: 40
  1. Osipov A.A., Ostanin M.A., Klimchik A.S.
    Analysis of mixed reality cross-device global localization algorithms based on point cloud registration
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 657-674

    State-of-the-art localization and mapping approaches for augmented (AR) and mixed (MR) reality devices are based on the extraction of local features from the camera image. In addition, modern AR/MR devices can build a three-dimensional mesh of the surrounding space. However, existing methods do not solve the problem of global device co-localization, because different devices use different methods for extracting computer vision features. Using a space map derived from the 3D mesh, we can solve the problem of collaborative global localization of AR/MR devices. This approach is independent of the feature descriptors and of the localization and mapping algorithms running onboard the AR/MR device. The mesh can be reduced to a point cloud consisting of only the mesh vertices. We propose an approach for collaborative localization of AR/MR devices using point clouds that is independent of the algorithms onboard the device. We analyze various point cloud registration algorithms and discuss their limitations for the problem of global co-localization of AR/MR devices indoors.
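
    Since the abstract above centers on point cloud registration, a minimal sketch of the pairwise case may help. It assumes the Open3D library; the file names, voxel size, and distance threshold are placeholders, and this is not the authors' implementation (in practice ICP is local, so a global initializer such as RANSAC over FPFH features would replace the identity initial guess).

    import numpy as np
    import open3d as o3d

    # Point clouds built from the mesh vertices of two AR/MR devices
    # (hypothetical file names).
    source = o3d.io.read_point_cloud("device_a_vertices.ply")
    target = o3d.io.read_point_cloud("device_b_vertices.ply")

    # Downsample to a common density; point-to-plane ICP needs normals.
    source = source.voxel_down_sample(voxel_size=0.05)
    target = target.voxel_down_sample(voxel_size=0.05)
    target.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

    # Refine an initial guess (identity here) into the rigid transform
    # that co-localizes the two devices' maps.
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_correspondence_distance=0.1, init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    print(result.fitness)           # fraction of matched points
    print(result.transformation)    # 4x4 device-to-device transform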

  2. Kalitin K.Y., Nevzorov A.A., Spasov A.A., Mukha O.Y.
    Deep learning analysis of intracranial EEG for recognizing drug effects and mechanisms of action
    Computer Research and Modeling, 2024, v. 16, no. 3, pp. 755-772

    Predicting novel drug properties is fundamental to polypharmacology, repositioning, and the study of biologically active substances during the preclinical phase. The use of machine learning, including deep learning methods, for the identification of drug – target interactions has gained increasing popularity in recent years.

    The objective of this study was to develop a method for recognizing psychotropic effects and drug mechanisms of action (drug – target interactions) based on an analysis of the bioelectrical activity of the brain using artificial intelligence technologies.

    Intracranial electroencephalographic (EEG) signals from rats were recorded (4 channels at a sampling frequency of 500 Hz) after the administration of psychotropic drugs (gabapentin, diazepam, carbamazepine, pregabalin, eslicarbazepine, phenazepam, arecoline, pentylenetetrazole, picrotoxin, pilocarpine, chloral hydrate). The signals were divided into 2-second epochs, then converted into $2000\times 4$ images and input into an autoencoder. The output of the bottleneck layer was subjected to classification and clustering using t-SNE, and then the distances between resulting clusters were calculated. As an alternative, an approach based on feature extraction with dimensionality reduction using principal component analysis and kernel support vector machine (kSVM) classification was used. Models were validated using 5-fold cross-validation.

    The classification accuracy obtained for 11 drugs during cross-validation was $0.580 \pm 0.021$, which is significantly higher than the accuracy of the random classifier $(0.091 \pm 0.045, p < 0.0001)$ and the kSVM $(0.441 \pm 0.035, p < 0.05)$. t-SNE maps were generated from the bottleneck parameters of intracranial EEG signals. The relative proximity of the signal clusters in the parametric space was assessed.

    The present study introduces an original method for biopotential-mediated prediction of effects and mechanism of action (drug – target interaction). This method employs convolutional neural networks in conjunction with a modified selective parameter reduction algorithm. Post-treatment EEGs were compressed into a unified parameter space. Using a neural network classifier and clustering, we were able to recognize the patterns of neuronal response to the administration of various psychotropic drugs.
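
    A minimal sketch of the bottleneck-and-clustering part of the pipeline described above, assuming PyTorch and scikit-learn; the fully-connected autoencoder here is only a stand-in for the authors' convolutional network, and the data are random placeholders.

    import torch
    import torch.nn as nn
    from sklearn.manifold import TSNE

    # Toy autoencoder: compresses a 2000x4 EEG epoch into a bottleneck vector.
    class AE(nn.Module):
        def __init__(self, bottleneck=32):
            super().__init__()
            self.enc = nn.Sequential(nn.Flatten(), nn.Linear(2000 * 4, 256),
                                     nn.ReLU(), nn.Linear(256, bottleneck))
            self.dec = nn.Sequential(nn.Linear(bottleneck, 256), nn.ReLU(),
                                     nn.Linear(256, 2000 * 4))
        def forward(self, x):
            z = self.enc(x)
            return self.dec(z), z

    model = AE()
    epochs = torch.randn(128, 2000, 4)          # 128 synthetic 2-second epochs
    recon, z = model(epochs)
    loss = nn.functional.mse_loss(recon, epochs.flatten(1))  # training objective

    # t-SNE maps bottleneck vectors to 2D; distances between drug clusters
    # are then measured in this parametric space.
    xy = TSNE(n_components=2, perplexity=30).fit_transform(z.detach().numpy())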

  3. Chuvilin K.V.
    An efficient algorithm for comparing ${\mathrm{\LaTeX}}$ documents
    Computer Research and Modeling, 2015, v. 7, no. 2, pp. 329-345

    The problem is to construct the differences that arise when ${\mathrm{\LaTeX}}$ documents are edited. Each document is represented as a parse tree whose nodes are called tokens. The smallest possible text representation of the document that does not change the syntax tree is constructed. The entire text is split into fragments whose boundaries correspond to tokens. A map of the initial text fragment sequence to the similar sequence of the edited document, corresponding to the minimum edit distance, is built with the Hirschberg algorithm (see the sketch after this entry). A map of text characters corresponding to the map of text fragment sequences is then constructed. Tokens whose characters are all deleted, all inserted, or all unchanged are selected in the parse trees. The map for the trees formed by the remaining tokens is built using the Zhang–Shasha algorithm.

    Views (last year): 2. Citations: 2 (RSCI).
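
    Hirschberg's algorithm computes a minimum-edit-distance alignment in linear memory by divide and conquer. A minimal self-contained sketch that returns the index pairs of matched fragments (an illustration only, not the author's implementation):

    def lcs_row(a, b):
        # Last row of the LCS length table for a vs. b, in O(len(b)) memory.
        prev = [0] * (len(b) + 1)
        for x in a:
            curr = [0]
            for j, y in enumerate(b):
                curr.append(prev[j] + 1 if x == y else max(prev[j + 1], curr[j]))
            prev = curr
        return prev

    def hirschberg(a, b):
        # Index pairs (i, j) of an optimal alignment with a[i] == b[j].
        if not a or not b:
            return []
        if len(a) == 1:
            return [(0, j) for j, y in enumerate(b) if a[0] == y][:1]
        mid = len(a) // 2
        l1 = lcs_row(a[:mid], b)
        l2 = lcs_row(a[mid:][::-1], b[::-1])
        k = max(range(len(b) + 1), key=lambda j: l1[j] + l2[len(b) - j])
        return (hirschberg(a[:mid], b[:k]) +
                [(i + mid, j + k) for i, j in hirschberg(a[mid:], b[k:])])

    old = ["\\section{A}", "same text", "old text"]
    new = ["\\section{A}", "same text", "new text"]
    print(hirschberg(old, new))   # [(0, 0), (1, 1)]: unchanged fragments map
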
  4. Sukhinov A.I., Chistyakov A.E., Semenyakina A.A., Nikitina A.V.
    Numerical modeling of the ecological situation of the Azov Sea using schemes of increased order of accuracy on a multiprocessor computer system
    Computer Research and Modeling, 2016, v. 8, no. 1, pp. 151-168

    The article covers the results of three-dimensional modeling of the ecological situation of shallow waters, using the Azov Sea as an example, with schemes of increased order of accuracy on a multiprocessor computer system of Southern Federal University. Discrete analogs of the convective and diffusive transfer operators of the fourth order of accuracy in the case of partially occupied cells were constructed and studied. The developed scheme of high (fourth) order of accuracy was used for solving problems of aquatic ecology and modeling the spatial distribution of polluting nutrients, which cause the growth of phytoplankton, many species of which are toxic and harmful. The use of high-order schemes improves the quality of the input data and decreases the error in the solutions of model problems of aquatic ecology. Numerical experiments were conducted for the substance transport problem on the basis of schemes of the second and fourth orders of accuracy; they showed that the accuracy increased by a factor of 48.7 for the diffusion-convection problem (a minimal stencil illustration follows this entry). A mathematical algorithm was proposed and numerically implemented to reconstruct the bottom topography of shallow waters on the basis of hydrographic data (water depth at individual points or contour levels). The map of the bottom relief of the Azov Sea was generated with this algorithm and used to build the fields of currents calculated on the basis of the hydrodynamic model. The fields of water flow currents were used as input data for the aquatic ecology models. A library of two-layer iterative methods was developed for solving the nine-diagonal difference equations that arise in the discretization of model problems for the concentrations of pollutants, plankton and fish on a multiprocessor computer system. It improved the precision of the calculated data and made it possible to obtain operational forecasts of changes in the ecological situation of shallow waters over short time intervals.

    Views (last year): 4. Citations: 31 (RSCI).
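
    The gain from raising the approximation order can be seen already in one dimension. A minimal sketch comparing second- and fourth-order central stencils for the diffusion term u'' (illustrative only, not the authors' multiprocessor scheme), assuming NumPy:

    import numpy as np

    h, x0 = 0.1, 1.0
    u = np.sin(x0 + np.arange(-2, 3) * h)   # five-point stencil; u'' = -sin
    exact = -np.sin(x0)

    # Second-order central difference: O(h^2) truncation error.
    d2_o2 = (u[1] - 2 * u[2] + u[3]) / h**2
    # Fourth-order central difference: O(h^4) truncation error.
    d2_o4 = (-u[0] + 16 * u[1] - 30 * u[2] + 16 * u[3] - u[4]) / (12 * h**2)

    print(abs(d2_o2 - exact))   # ~7e-4
    print(abs(d2_o4 - exact))   # ~1e-6: several orders of magnitude smaller
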
  5. Musaev A.A., Grigoriev D.A.
    Extracting knowledge from text messages: overview and state-of-the-art
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1291-1315

    In general, solving the information explosion problem can be delegated to systems for automatic processing of digital data. These systems are intended for recognizing, sorting, meaningfully processing and presenting data in formats readable and interpretable by humans. The creation of intelligent knowledge extraction systems that handle unstructured data would be a natural solution in this area. At the same time, the evident progress in these tasks for structured data contrasts with the limited success of unstructured data processing, and, in particular, document processing. Currently, this research area is undergoing active development and investigation. The present paper is a systematic survey of both Russian and international publications that are dedicated to the leading trend in automatic text data processing: Text Mining (TM). We cover the main tasks and notions of TM, as well as its place in the current AI landscape. Furthermore, we analyze the complications of natural language processing (NLP) that arise because texts written in natural language are weakly structured and often provide ambiguous linguistic information. We describe the stages of text data preparation, cleaning, and feature selection which, alongside the data obtained via morphological, syntactic, and semantic analysis, constitute the input for the TM process. This process can be represented as mapping a set of text documents to "knowledge" (a minimal sketch follows this entry). Using the case of stock trading, we demonstrate the formalization of the problem of making a trade decision based on a set of analytical recommendations. Examples of such mappings are methods of Information Retrieval (IR), text summarization, sentiment analysis, document classification and clustering, etc. The common point of all tasks and techniques of TM is the selection of word forms and their derivatives used to recognize content in natural language symbol sequences. Considering IR as an example, we examine classic types of search, such as searching for word forms, phrases, patterns and concepts. Additionally, we consider the augmentation of patterns with syntactic and semantic information. Next, we provide a general description of all NLP instruments: morphological, syntactic, semantic and pragmatic analysis. Finally, we end the paper with a comparative analysis of modern TM tools which can be helpful for selecting a suitable TM platform based on the user's needs and skills.
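
    As a minimal illustration of the central mapping "text documents → knowledge" (a sketch only, unrelated to the specific platforms compared in the survey), assuming scikit-learn; the four toy messages stand in for analytical recommendations:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    docs = ["Broker upgrades the stock to buy",
            "Analyst downgrades the shares to sell",
            "Quarterly report beats expectations",
            "Company misses its revenue forecast"]

    # Preparation and feature selection: TF-IDF over word forms.
    X = TfidfVectorizer(stop_words="english").fit_transform(docs)
    # Mapping documents to "knowledge": cluster labels that could feed
    # a trade-decision rule (e.g. positive vs. negative signals).
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)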

  6. Almasri A., Tsybulin V.G.
    A dynamic analysis of a prey – predator – superpredator system: a family of equilibria and its destruction
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1601-1615

    The paper investigates the dynamics of a finite-dimensional model describing the interaction of three populations: prey $x(t)$, a predator $y(t)$ that consumes it, and a superpredator $z(t)$ that feeds on both species. Mathematically, the problem is formulated as a system of nonlinear first-order differential equations with the following right-hand side: $[x(1-x)-(y+z)g;\,\eta_1^{}yg-d_1^{}f-\mu_1^{}y;\,\eta_2^{}zg+d_2^{}f-\mu_2^{}z]$, where $\eta_j^{}$, $d_j^{}$, $\mu_j^{}$ ($j=1,\,2$) are positive coefficients. The considered model belongs to the class of cosymmetric dynamical systems under the Lotka\,--\,Volterra functional response $g=x$, $f=yz$, and two parameter constraints: $\mu_2^{}=d_2^{}\left(1+\frac{\mu_1^{}}{d_1^{}}\right)$, $\eta_2^{}=d_2^{}\left(1+\frac{\eta_1^{}}{d_1^{}}\right)$. In this case, a family of equilibria forms a straight line in phase space (a numerical sketch follows this entry). We have analyzed the stability of the equilibria from the family and of the isolated equilibria. Maps of stationary solutions and limit cycles have been constructed. The breakdown of the family is studied by violating the cosymmetry conditions, using the Holling model $g(x)=\frac x{1+b_1^{}x}$ and the Beddington–DeAngelis model $f(y,\,z)=\frac{yz}{1+b_2^{}y+b_3^{}z}$. To achieve this, the apparatus of Yudovich's theory of cosymmetry is applied, including the computation of cosymmetric defects and selective functions. Through numerical experimentation, invasive scenarios have been analyzed, encompassing the introduction of a superpredator into the predator-prey system and the elimination of the predator or the superpredator.
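
    A minimal numerical sketch of the system in the cosymmetric Lotka–Volterra case ($g = x$, $f = yz$, with the two parameter constraints), assuming SciPy; the coefficient values and initial point are illustrative only:

    import numpy as np
    from scipy.integrate import solve_ivp

    eta1, d1, mu1, d2 = 2.0, 0.5, 0.4, 0.3      # illustrative coefficients
    mu2 = d2 * (1 + mu1 / d1)                    # cosymmetry constraint
    eta2 = d2 * (1 + eta1 / d1)                  # cosymmetry constraint

    def rhs(t, u):
        x, y, z = u
        g, f = x, y * z                          # Lotka-Volterra response
        return [x * (1 - x) - (y + z) * g,
                eta1 * y * g - d1 * f - mu1 * y,
                eta2 * z * g + d2 * f - mu2 * z]

    sol = solve_ivp(rhs, (0, 500), [0.5, 0.2, 0.1], rtol=1e-8, atol=1e-10)
    # Under cosymmetry the limit point depends on the initial data:
    # a whole line of equilibria coexists in phase space.
    print(sol.y[:, -1])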

  7. Tominin Y.D., Tominin V.D., Borodich E.D., Kovalev D.A., Dvurechensky P.E., Gasnikov A.V., Chukanov S.V.
    On Accelerated Methods for Saddle-Point Problems with Composite Structure
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 433-467

    We consider strongly-convex-strongly-concave saddle-point problems with a general non-bilinear objective and different condition numbers with respect to the primal and dual variables. First, we consider such problems with smooth composite terms, one of which has finite-sum structure. For this setting we propose a variance reduction algorithm with complexity estimates superior to the existing bounds in the literature. Second, we consider finite-sum saddle-point problems with composite terms and propose several algorithms depending on the properties of the composite terms. When the composite terms are smooth we obtain better complexity bounds than the ones in the literature, including the bounds of a recently proposed nearly-optimal algorithm which does not consider the composite structure of the problem. If the composite terms are prox-friendly, we propose a variance reduction algorithm that, on the one hand, is accelerated compared to existing variance reduction algorithms and, on the other hand, provides in the composite setting complexity bounds similar to those of the nearly-optimal algorithm designed for the noncomposite setting. Besides, our algorithms allow one to separate the complexity bounds, i.e. to estimate, for each part of the objective separately, the number of oracle calls that is sufficient to achieve a given accuracy. This is important since different parts can have different arithmetic complexity of the oracle, and it is desirable to call expensive oracles less often than cheap ones. The key to all these results is our general framework for saddle-point problems, which may be of independent interest. This framework, in turn, is based on our proposed Accelerated Meta-Algorithm for composite optimization with probabilistic inexact oracles and probabilistic inexactness in the proximal mapping, which may also be of independent interest.
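
    For orientation, the problem class is $\min_x \max_y f(x,\,y)$ with strong convexity in $x$ and strong concavity in $y$. A sketch of the simplest baseline on a toy objective (plain simultaneous gradient descent-ascent, not the accelerated variance-reduced methods proposed in the paper), assuming NumPy:

    import numpy as np

    # Toy objective: f(x, y) = (mu/2)|x|^2 + x^T A y - (mu/2)|y|^2,
    # strongly convex in x, strongly concave in y; saddle point at (0, 0).
    mu = 1.0
    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 5)) / np.sqrt(5)

    x, y = np.ones(5), np.ones(5)
    step = 0.1
    for _ in range(1000):
        gx = mu * x + A @ y                      # gradient in x
        gy = A.T @ x - mu * y                    # gradient in y
        x, y = x - step * gx, y + step * gy      # descent in x, ascent in y
    print(np.linalg.norm(x), np.linalg.norm(y))  # both near 0: the saddle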

  8. Zavodskikh R.K., Efanov N.N.
    Performance prediction for chosen types of loops over one-dimensional arrays with embedding-driven intermediate representations analysis
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 211-224

    A method for mapping a set of intermediate representations (IR) of C and C++ programs to a vector embedding space is considered, in order to create an empirical framework for static performance prediction using the LLVM compiler infrastructure. The use of embeddings makes programs easier to compare, since it avoids direct comparison of Control Flow Graphs (CFG) and Data Flow Graphs (DFG). The method is based on a series of transformations of the initial IR: instrumentation, i.e. injection of artificial instructions in an instrumentation compiler pass depending on the load offset delta between the current instruction and the previous one; mapping of the instrumented IR into a multidimensional vector with IR2Vec; and dimension reduction with the t-SNE (t-distributed stochastic neighbor embedding) method. The D1 cache miss ratio measured with the perf stat tool is taken as the performance metric. A heuristic criterion for deciding which of two programs has the higher cache miss ratio is given; it is based on the embeddings of the programs in 2D space. The instrumentation compiler pass developed in this work is described: how it generates and injects artificial instructions into the IR within the memory model used. The software pipeline that implements the performance estimation on top of the LLVM compiler infrastructure is presented. Computational experiments are performed on synthetic tests, i.e. sets of programs with the same CFGs but with different sequences of offsets used when accessing a one-dimensional array of a given size. The correlation coefficient between the performance metric and the distance to the worst program's embedding is measured and shown to be negative regardless of the t-SNE initialization, which confirms the heuristic criterion (a minimal sketch of this evaluation step follows this entry). The process of generating such synthetic tests is also considered. Moreover, the spread of the performance metric across the programs of such a test is proposed as a quality measure to be improved by exploring further test generators.
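
    A sketch of the final evaluation step, assuming scikit-learn and SciPy; the vectors and miss ratios below are random stand-ins for the IR2Vec embeddings and perf stat measurements:

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(1)
    vectors = rng.normal(size=(50, 300))     # stand-ins for IR2Vec embeddings
    miss = rng.uniform(0.0, 0.3, size=50)    # stand-in D1 cache miss ratios

    # Reduce embeddings to 2D, then correlate the distance to the worst
    # program's embedding with the performance metric.
    xy = TSNE(n_components=2, perplexity=10).fit_transform(vectors)
    worst = xy[np.argmax(miss)]
    dist = np.linalg.norm(xy - worst, axis=1)
    r, p = pearsonr(miss, dist)
    print(r, p)   # on the paper's real tests, r is reported to be negative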

  9. Chuvilin K.V.
    The use of syntax trees in order to automate the correction of LaTeX documents
    Computer Research and Modeling, 2012, v. 4, no. 4, pp. 871-883

    The problem is to automate the correction of LaTeX documents. Each document is represented as a parse tree. The modified Zhang–Shasha algorithm is used to construct a mapping of the tree vertices of the original document to the tree vertices of the edited document which corresponds to the minimum editing distance (a sketch based on an off-the-shelf implementation follows this entry). Vertex-to-vertex maps form the training set, which is used to generate rules for automatic correction. Statistics on the applicability of each rule to the edited documents are collected and used for quality assessment and improvement of the rules.

    Citations: 5 (RSCI).
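
    The Zhang–Shasha tree edit distance that underlies the vertex mapping has an off-the-shelf implementation in the third-party Python package zss (an assumption here; the paper uses its own modified variant). A minimal sketch on two toy parse trees:

    from zss import Node, simple_distance

    # Toy parse trees of an original and an edited document fragment.
    original = Node("document").addkid(
        Node("section").addkid(Node("text")))
    edited = Node("document").addkid(
        Node("section").addkid(Node("text")).addkid(Node("formula")))

    # Minimum number of node insertions, deletions and relabelings.
    print(simple_distance(original, edited))   # 1: the inserted "formula" node
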
  10. Kamenev G.K., Kamenev I.G.
    Multicriterial metric data analysis in human capital modelling
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1223-1245

    The article describes a model of a human in the informational economy and demonstrates a multicriteria optimizational approach to the metric analysis of model-generated data. The traditional approach to identification and study involves identifying the model from time series and then using it for prediction. However, this is not possible when some variables are not explicitly observed and only some typical borders or population features are known, which is often the case in the social sciences, making some models purely theoretical. To avoid this problem, we propose a method of metric data analysis (MMDA) for the identification and study of such models, based on the construction and analysis of Kolmogorov – Shannon metric nets of the general population in a multidimensional space of social characteristics. Using this method, the coefficients of the model are identified and the features of its phase trajectories are studied. In this paper, we describe a human according to their role in information processing, considering their awareness and cognitive abilities. We construct two lifetime indices of human capital: creative (generalizing cognitive abilities) and productive (generalizing the amount of information mastered by a person) and formulate the problem of their multi-criteria (two-criteria) optimization taking into account life expectancy. This approach allows us to identify and economically justify new requirements for the education system and the information environment of human existence. It is shown that a Pareto frontier exists in the optimization problem, and its type depends on the mortality rates: at high life expectancy there is one dominant solution, while for lower life expectancy there are different types of Pareto frontier (a minimal frontier-extraction sketch follows this entry). In particular, the Pareto principle applies to Russia: a significant increase in the creative human capital of an individual (generalizing cognitive abilities) is possible at the cost of a small decrease in the productive human capital (generalizing awareness). It is shown that an increase in life expectancy makes the competence approach (focused on the development of cognitive abilities) optimal, while for low life expectancy the knowledge approach is preferable.
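
    A minimal sketch of the Pareto-frontier extraction behind the two-criteria optimization (both indices maximized), assuming NumPy; the points are synthetic stand-ins for (creative, productive) index pairs:

    import numpy as np

    def pareto_front(points):
        # Keep the points not dominated when both criteria are maximized.
        front = []
        for i, p in enumerate(points):
            dominated = any(np.all(q >= p) and np.any(q > p)
                            for j, q in enumerate(points) if j != i)
            if not dominated:
                front.append(p)
        return np.array(front)

    pts = np.array([[0.9, 0.2], [0.7, 0.6], [0.4, 0.8], [0.3, 0.3]])
    print(pareto_front(pts))   # [0.3, 0.3] is dominated and drops out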
