All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
- Quantile shape measures for heavy-tailed distributions
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1041-1077
Journal papers currently contain numerous examples of the use of heavy-tailed distributions in applied research on various complex systems. Models of extreme data are usually limited to the small set of distribution shapes that has historically been used in a given field of applied research. This set of probability distribution shapes can be enlarged by comparing shape measures of distributions and choosing the most suitable realizations. Using the beta distribution of the second kind as an example, it is shown that the moments of heavy-tailed realizations of the beta family of distributions may be undefined, which limits the applicability of the classical method of moments for studying the shapes of distributions characterized by heavy tails. For this reason, the development of new methods for comparing distributions based on quantile shape measures, free from restrictions on the shape parameters, remains relevant. The purpose of this work is a computer study of the possibility of constructing a space of quantile shape measures for comparing distributions with heavy tails. Realizations of distributions were mapped into the space of shape measures using computer simulation. Mapping distributions into the space of only two parametric shape measures showed that the regions occupied by heavy-tailed distributions overlap, which makes it impossible to compare the shapes of distributions of different types in the space of quantile measures of skewness and kurtosis alone. It is well known that information shape measures, such as entropy and the entropy uncertainty interval, contain additional information about the shape of heavy-tailed distributions. In this paper, a quantile entropy coefficient is proposed as an additional independent shape measure, based on the ratio of the entropy and quantile uncertainty intervals. Estimates of the quantile entropy coefficient are obtained for a number of well-known heavy-tailed distributions. The possibility of comparing distribution shapes with realizations of the beta distribution of the second kind is illustrated using the lognormal and Pareto distributions. Mapping the positions of stable distributions in the three-dimensional space of quantile shape measures made it possible to estimate the shape parameters of the beta distribution of the second kind whose shape is closest to the Lévy shape. It follows from the paper that mapping distributions into the three-dimensional space of the quantile measures of skewness, kurtosis and the entropy coefficient significantly expands the possibilities of comparing the shapes of heavy-tailed distributions.
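The exact definitions of the quantile shape measures are not reproduced in this abstract, so the sketch below uses the classical Bowley quantile skewness and Moors quantile kurtosis together with an assumed form of the quantile entropy coefficient (the entropy uncertainty interval exp(H) divided by an interquantile interval), purely to illustrate mapping heavy-tailed samples into such a three-dimensional space of shape measures.

```python
# Illustrative sketch only: Bowley/Moors are standard quantile shape measures;
# the entropy-coefficient formula below is an assumption, not the paper's.
import numpy as np
from scipy import stats

def bowley_skewness(x):
    q1, q2, q3 = np.quantile(x, [0.25, 0.5, 0.75])
    return (q3 + q1 - 2.0 * q2) / (q3 - q1)

def moors_kurtosis(x):
    e = np.quantile(x, [0.125, 0.25, 0.375, 0.625, 0.75, 0.875])
    return ((e[5] - e[3]) + (e[2] - e[0])) / (e[4] - e[1])

def quantile_entropy_coefficient(x, alpha=0.05):
    h = stats.differential_entropy(x)             # nonparametric entropy estimate
    q_lo, q_hi = np.quantile(x, [alpha, 1.0 - alpha])
    return np.exp(h) / (q_hi - q_lo)              # assumed ratio of intervals

rng = np.random.default_rng(0)
samples = {
    "lognormal": rng.lognormal(mean=0.0, sigma=1.0, size=100_000),
    "Pareto":    rng.pareto(a=1.5, size=100_000) + 1.0,
}
for name, s in samples.items():
    print(name, bowley_skewness(s), moors_kurtosis(s),
          quantile_entropy_coefficient(s))
```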
- Computational treatment of natural language text for intent detection
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1539-1554
Intent detection plays a crucial role in task-oriented conversational systems. To understand the user’s goal, the system relies on its intent detector to classify the user’s utterance, which may be expressed in different forms of natural language, into intent classes. However, the efficacy of intent detection systems has been hindered by a lack of data and by the fact that user intent text is typically characterized by short, general sentences and colloquial expressions. The process of algorithmically determining user intent from a given statement is known as intent detection. The goal of this study is to develop an intent detection model that accurately classifies and detects user intent. The similarity scores of the three models used are calculated to determine their similarities. The proposed model uses Contextual Semantic Search (CSS) capabilities for semantic search, Latent Dirichlet Allocation (LDA) for topic modeling, the Bidirectional Encoder Representations from Transformers (BERT) semantic matching technique, and the combination of LDA and BERT for text classification and detection. The dataset was acquired from the Broad Twitter Corpus (BTC) and comprises various metadata. A pre-processing step was applied to prepare the data for analysis. A sample of 1432 instances was selected out of the 5000 available because manual annotation is required and could be time-consuming. To compare the performance of the model with existing models, the similarity scores, precision, recall, F1 score and accuracy were computed. The results revealed that LDA-BERT achieved an accuracy of 95.88% for intent detection, BERT an accuracy of 93.84%, and LDA an accuracy of 92.23%. This shows that LDA-BERT performs better than the other models. It is hoped that the novel model will aid in ensuring information security and social media intelligence. For future work, an unsupervised LDA-BERT without any labeled data can be studied with the model.
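The abstract does not spell out how LDA and BERT are combined, so the sketch below shows one common variant: concatenating LDA topic proportions with sentence-level BERT embeddings before a conventional classifier. The encoder name, toy data and classifier choice are assumptions, not the authors' exact pipeline.

```python
# Minimal LDA-BERT sketch under stated assumptions (not the paper's pipeline).
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sentence_transformers import SentenceTransformer

texts = ["book a flight to lagos tomorrow",
         "cancel my hotel reservation",
         "what is the weather like today",
         "reserve a table for two tonight"]
labels = [0, 1, 2, 3]                     # illustrative intent classes

# Topic features from LDA over a bag-of-words representation.
bow = CountVectorizer().fit_transform(texts)
lda_feats = LatentDirichletAllocation(n_components=4,
                                      random_state=0).fit_transform(bow)

# Contextual features from a pretrained sentence-BERT encoder.
bert = SentenceTransformer("all-MiniLM-L6-v2")
bert_feats = bert.encode(texts)

# LDA-BERT: concatenate both views and train a simple classifier.
features = np.hstack([lda_feats, bert_feats])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))
```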
- The adaptive Gaussian receptive fields for spiking encoding of numeric variables
Computer Research and Modeling, 2025, v. 17, no. 3, pp. 389-400
Conversion of numeric data to the spiking form and the information losses in this process are serious problems limiting the usage of spiking neural networks in applied information systems. While physical values are represented by numbers, the internal representation of information inside spiking neural networks is based on spikes — elementary objects emitted and processed by neurons. This problem is especially hard in reinforcement learning applications, where an agent should learn to behave in the dynamic real world: besides the accuracy of the encoding method, its dynamic characteristics should be considered as well. The encoding algorithm based on Gaussian receptive fields (GRF) is frequently used. In this method, one numeric variable fed to the network is represented by spike streams emitted by a certain set of network input nodes. The spike frequency in each stream is determined by the proximity of the current variable value to the center of the receptive field corresponding to the given input node. In the standard GRF algorithm, the receptive field centers are placed equidistantly. However, this is inefficient when the distribution of the encoded variable is very uneven. In the present paper, an improved version of this method is proposed, based on adaptive selection of the Gaussian centers and spike stream frequencies. This improved GRF algorithm is compared with its standard version in terms of the amount of information lost in the coding process and the accuracy of classification models built on spike-encoded data. The fraction of information retained by the standard and adaptive GRF encodings is estimated by applying the direct and reverse encoding procedures to a large sample from the triangular probability distribution and counting the coinciding bits in the original and restored samples. The comparison based on classification was performed on the task of evaluating the current state in reinforcement learning. For this purpose, classification models were created by machine learning algorithms of very different nature — the nearest neighbors algorithm, random forest and the multi-layer perceptron. The superiority of our approach is demonstrated on all these tests.
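A minimal sketch of the contrast between the standard and an adaptive GRF encoder follows; placing the adaptive centers at empirical quantiles of the data is our assumption about one reasonable center-selection rule, not necessarily the paper's exact algorithm.

```python
# Standard (equidistant) vs. adaptive (quantile-based) GRF centers; the
# quantile placement rule is an illustrative assumption.
import numpy as np

def grf_rates(x, centers, sigma, max_rate=100.0):
    """Spike rate of each input node for value x (illustrative rate scale)."""
    return max_rate * np.exp(-0.5 * ((x - centers) / sigma) ** 2)

rng = np.random.default_rng(1)
sample = rng.triangular(left=0.0, mode=0.1, right=1.0, size=10_000)
n_fields = 10

# Standard GRF: equidistant centers over the value range.
std_centers = np.linspace(sample.min(), sample.max(), n_fields)

# Adaptive GRF: centers at empirical quantiles, dense where data are dense.
adp_centers = np.quantile(sample, np.linspace(0.05, 0.95, n_fields))

x = 0.15
print(grf_rates(x, std_centers, sigma=0.10).round(1))
print(grf_rates(x, adp_centers, sigma=0.05).round(1))
```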
- A general approach to constructing gradient methods for parameter identification based on modified weighted Gram – Schmidt orthogonalization and information-type discrete filtering algorithms
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 761-782
The paper considers the problem of parameter identification of discrete-time linear stochastic systems in the state space with additive and multiplicative noise. It is assumed that the state and measurement equations of a discrete-time linear stochastic system depend on an unknown parameter to be identified.
A new approach to the construction of gradient parameter identification methods in the class of discrete-time linear stochastic systems with additive and multiplicative noise is presented, based on the application of modified weighted Gram – Schmidt orthogonalization (MWGS) and the discrete-time information-type filtering algorithms.
The main theoretical results of this research include: 1) a new identification criterion in terms of an extended information filter; 2) a new algorithm for calculating derivatives with respect to an uncertainty parameter in a discrete-time linear stochastic system based on an extended information LD filter using the direct procedure of modified weighted Gram – Schmidt orthogonalization; and 3) a new method for calculating the gradient of identification criteria using a “differentiated” extended information LD filter.
The advantages of this approach are that it uses MWGS orthogonalization, which is numerically stable with respect to machine round-off errors and forms the basis of all the developed methods and algorithms. The information LD filter maintains the symmetry and positive definiteness of the information matrices. The algorithms have an array structure that is convenient for computer implementation (a generic sketch of the MWGS primitive is given after this abstract).
All the developed algorithms were implemented in MATLAB. A series of numerical experiments were carried out. The results obtained demonstrated the operability of the proposed approach, using the example of solving the problem of parameter identification for a mathematical model of a complex mechanical system.
The results can be used to develop methods for identifying parameters in mathematical models that are represented in state space by discrete-time linear stochastic systems with additive and multiplicative noise.
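As background for the MWGS step underlying these algorithms, below is a minimal generic sketch of modified weighted Gram – Schmidt orthogonalization in the factorization form V W Vᵀ = L D Lᵀ (unit lower-triangular L, diagonal D) commonly used in LD filtering. It illustrates the primitive only, not the authors' identification method, and assumes V has full row rank.

```python
import numpy as np

def mwgs(V, w):
    """Factor V @ diag(w) @ V.T as L @ diag(d) @ L.T (generic sketch).

    V: (n, m) with full row rank, w: (m,) positive weights.
    Returns unit lower-triangular L (n, n) and diagonal entries d (n,).
    """
    n, _ = V.shape
    E = np.array(V, dtype=float)          # rows become mutually w-orthogonal
    L = np.eye(n)
    d = np.zeros(n)
    for i in range(n):
        for j in range(i):
            L[i, j] = (E[i] * w) @ E[j] / d[j]
            E[i] = E[i] - L[i, j] * E[j]  # modified GS: update immediately
        d[i] = (E[i] * w) @ E[i]
    return L, d

rng = np.random.default_rng(0)
V = rng.standard_normal((3, 5))
w = rng.uniform(0.5, 2.0, size=5)
L, d = mwgs(V, w)
print(np.allclose(V @ np.diag(w) @ V.T, L @ np.diag(d) @ L.T))  # True
```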
- Subsystem “Developer” as a part of the Retail Payment System
Computer Research and Modeling, 2013, v. 5, no. 1, pp. 25-36
In this paper we consider one of the core subsystems of the retail payment system, named “Developer”. A queuing system for modeling this subsystem was developed, and information about it is provided. The assignment problem was set up and solved using a modification of the Hungarian algorithm. Information about the agent-based model of the “Developer” subsystem and the results of the simulation experiments are given.
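The abstract does not detail the modification of the Hungarian algorithm; as a generic illustration of the underlying assignment problem, here is a sketch using SciPy's optimal-assignment solver on an invented cost matrix.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative cost matrix: cost[i, j] = cost of assigning worker i to task j.
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])
rows, cols = linear_sum_assignment(cost)   # Hungarian-style optimal assignment
print(list(zip(rows, cols)), cost[rows, cols].sum())
```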
- Web-based interactive registry of the geosensors
Computer Research and Modeling, 2016, v. 8, no. 4, pp. 621-632
Selecting and correctly applying a geosensor — an instrument of mineral geothermobarometry — is challenging because of the wide variety of existing geosensors on the one hand and the specific requirements for their use on the other. In this paper, organizing the geosensors within a computer system called an interactive registry is proposed to reduce the labor intensity of geosensor usage and to provide information support for them. The article gives a formal description of a thermodynamic geosensor as a function of mineral compositions and independent parameters, as well as the basic steps of pressure and temperature estimation common to all geosensors: conversion to formula units, calculation of additional parameters, and calculation of the required values. Existing collections of geosensors, implemented as standalone applications or as spreadsheets, were examined for the advantages and disadvantages of these approaches. The additional information necessary to use a geosensor is described: paragenesis, accuracy and range of parameter values, references and others. An implementation of the geosensor registry as a web-based application using wiki technology is proposed. Wiki technology makes it possible to effectively combine loosely formalized additional information about a geosensor with its algorithm, written in a programming language, into a single information system. Links, namespaces and wiki markup are used to organize the information. The article discusses an implementation built on top of the DokuWiki system with a specially designed RESTful server that allows users to apply the geosensors from the registry to their own data. The R programming language is used as the geosensor description language, and the Rserve server is used for calculations. A unit test for each geosensor allows checking the correctness of its implementation. The user interface of the application is implemented as a DokuWiki plug-in. A usage example is given. The article concludes with a discussion of application security, performance and scaling.
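The registry itself describes geosensors in R; purely as an illustration of the three common steps listed above (conversion to formula units, calculation of additional parameters, calculation of the required values), here is a Python sketch in which the composition measure and the "temperature" formula are invented placeholders.

```python
# Hypothetical three-step geosensor pipeline; all formulas are placeholders.
from dataclasses import dataclass
from typing import Callable, Dict

Composition = Dict[str, float]   # oxide wt.% of a mineral analysis

@dataclass
class Geosensor:
    to_formula_units: Callable[[Composition], Dict[str, float]]
    extra_params: Callable[[Dict[str, float]], Dict[str, float]]
    estimate: Callable[[Dict[str, float], float], float]  # (params, P) -> T

    def apply(self, analysis: Composition, pressure_kbar: float) -> float:
        fu = self.to_formula_units(analysis)       # step 1: formula units
        params = self.extra_params(fu)             # step 2: extra parameters
        return self.estimate(params, pressure_kbar)  # step 3: required value

toy = Geosensor(
    to_formula_units=lambda a: {"Mg#": a["MgO"] / (a["MgO"] + a["FeO"])},
    extra_params=lambda fu: {"lnKd": fu["Mg#"]},               # placeholder
    estimate=lambda p, P: 700.0 + 100.0 * p["lnKd"] + 5.0 * P,  # invented
)
print(toy.apply({"MgO": 18.0, "FeO": 12.0}, pressure_kbar=10.0))
```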
- Bayesian localization for autonomous vehicle using sensor fusion and traffic signs
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 295-303
The localization of a vehicle is an important task in the field of intelligent transportation systems. It is well known that sensor fusion helps to create more robust and accurate systems for autonomous vehicles. Standard approaches, like the extended Kalman filter or the particle filter, are inefficient in the case of highly non-linear data or have a high computational cost, which complicates their use in embedded systems. A significant increase in precision, especially when GPS (Global Positioning System) is unavailable, may be achieved by using landmarks with known location — such as traffic signs, traffic lights, or SLAM (Simultaneous Localization and Mapping) features. However, this approach may be inapplicable if the a priori locations are unknown or not accurate enough. We suggest a new approach for refining the coordinates of a vehicle by using landmarks such as traffic signs. The core part of the suggested system is a Bayesian framework, which refines the vehicle location using external data about previous traffic sign detections, collected by crowdsourcing. This paper presents an approach that combines trajectories built using global coordinates from GPS and relative coordinates from an Inertial Measurement Unit (IMU) to produce a vehicle's trajectory in an unknown environment. In addition, we collected a new dataset, including smartphone GPS and IMU sensor data and the video feed from a windshield camera, recorded during 4 car rides on the same route. We also collected precise location data from a Real Time Kinematic Global Navigation Satellite System (RTK-GNSS) device, which can be used for validation. The RTK-GNSS system was used to collect precise data about the traffic sign locations on the route as well. The results show that the Bayesian approach helps with the trajectory correction and gives better estimations as the amount of prior information increases. The suggested method is efficient and requires, apart from the GPS/IMU measurements, only information about the vehicle locations during previous traffic sign detections.
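The abstract gives the idea but not the equations; a one-dimensional product-of-Gaussians update is sketched below as a minimal illustration of how a landmark detection can refine a GPS/IMU position prior. The one-dimensional form and all numbers are invented, not the authors' full framework.

```python
# Minimal Bayesian refinement sketch: Gaussian prior times Gaussian measurement.
import numpy as np

def fuse_gaussian(mu_prior, var_prior, mu_meas, var_meas):
    """Posterior of position given a Gaussian prior and a Gaussian measurement."""
    k = var_prior / (var_prior + var_meas)          # Kalman-style gain
    mu_post = mu_prior + k * (mu_meas - mu_prior)
    var_post = (1.0 - k) * var_prior
    return mu_post, var_post

# Prior from dead reckoning (GPS + IMU); measurement from a detected sign
# whose position is known from previous crowdsourced detections.
mu, var = 120.0, 9.0             # metres along the route, variance in m^2
sign_pos, sign_var = 123.5, 1.0
print(fuse_gaussian(mu, var, sign_pos, sign_var))   # pulled toward the sign
```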
- Modeling the structure of a complex system based on estimation of the measure of interaction of subsystems
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 707-719
The use of measures of interaction between channels in choosing the configuration structure of a control system for complex dynamic objects is considered in this work. The main methods for determining the measure of interaction between subsystems of complex control systems are presented, based on RGA (Relative Gain Array), dynamic RGA, HIIA (Hankel Interaction Index Array) and PM (Participation Matrix). When choosing a control configuration, simple configurations are preferable, as they are simpler to design and maintain and more resistant to failures; however, complex configurations provide higher-performance control systems. Processes in large dynamic objects are characterized by a high degree of interaction between process variables. Interaction measures are used in the design of the control structure, namely, in selecting the control structure and deciding on the configuration of the controller. The choice of control structure consists in determining which dynamic connections should be used to design the controller; once a structure is selected, these connections can be used to configure the controller. For large systems, it is proposed to pre-group the components of the input and output signal vectors of the actuators and sensitive elements into sets in which the number of variables is significantly reduced, in order to select a control structure. A quantitative estimate of the decentralization of the control system based on minimizing the sum of the off-diagonal elements of the PM matrix is given. An example of estimating the measure of interaction between components of strongly coupled subsystems and between components of weakly coupled subsystems is given, together with a quantitative estimate of the effect of neglecting the interaction of components of weakly coupled subsystems. The construction of a weighted graph for visualizing the interaction of the subsystems of a complex system is considered. A method for forming the controllability Gramian on the vector of output signals that is invariant to state-vector transformations is proposed. An example of the decomposition of the stabilization system for the components of a flying vehicle's angular velocity vector is given. Estimating the measures of mutual influence of processes in the channels of control systems makes it possible to increase the reliability of the systems by exploiting the analytical redundancy of information from various devices, which reduces mass and energy consumption. The methods for assessing measures of the interaction of processes in subsystems of control systems can be used in the design of complex systems, for example, motion control systems and orientation and stabilization systems of vehicles.
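Of the interaction measures listed, the RGA has a particularly compact definition: for a square steady-state gain matrix G, RGA = G ∘ (G⁻¹)ᵀ elementwise. A minimal sketch with an invented gain matrix:

```python
import numpy as np

def rga(G):
    """Relative Gain Array: elementwise (Hadamard) product G * inv(G).T."""
    return G * np.linalg.inv(G).T

G = np.array([[2.0, 0.5],
              [0.4, 1.5]])      # invented steady-state gains
print(rga(G))
# Rows and columns sum to 1; off-diagonal elements near 0 suggest that a
# decentralized (diagonal) pairing of inputs to outputs is acceptable.
```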
- Improvement of the paired comparison method for implementation in computer programs used in assessment of technical systems’ quality
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1125-1135
The article describes an improved paired comparison method, which systematizes in tables the rules of logical conclusions and the formulas of checking indices for the comparison of technical systems. To achieve this goal, the authors formulate rational rules of logical conclusions for making a paired comparison of systems. In addition, to check the consistency of the assessment results, the authors introduce parameters such as “the number of scores gained by one system” and “systems’ quality index” and derive the corresponding calculation formulas. For the practical application of this method in computer programs, the authors propose to use formalized variants of interconnected tables: a table for processing and systematizing expert information, a table of possible logical conclusions based on the results of comparing a set number of technical systems, and a table of check values for the paired comparison method used in quality assessment of a given number of technical systems. These tables make it possible to organize the information processing procedures more rationally and to largely exclude the influence of mistakes on the results of quality assessment of technical systems at the data input stage. The main positive effect of the improved paired comparison method is a considerable reduction of the time and resources needed to organize the experts’ work, process expert information, and prepare and conduct remote interviews with experts (on the Internet or a local computer network of an organization). This effect is achieved by a rational use of the input data on the quality of the systems to be assessed. The proposed method is applied in computer programs used in assessing the effectiveness and stability of large technical systems.
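The authors' exact checking formulas are given in the paper, not the abstract; the sketch below assumes the standard conventions (1 point for a win, 0.5 for a tie, and the check that all scores sum to n(n-1)/2) to illustrate the score and consistency parameters described.

```python
# Assumed standard scoring for paired comparisons; not the authors' formulas.
import numpy as np

n = 3
# results[i, j] = 1 if system i was judged better than system j,
# 0 if worse, 0.5 if the systems were judged equal (for i != j).
results = np.array([[0.0, 1.0, 0.5],
                    [0.0, 0.0, 1.0],
                    [0.5, 0.0, 0.0]])

scores = results.sum(axis=1)              # "number of scores" per system
quality = scores / (n - 1)                # a simple quality index in [0, 1]
assert scores.sum() == n * (n - 1) / 2    # consistency check on the input
print(scores, quality.round(2))
```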
- Lidar and camera data fusion in self-driving cars
Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1239-1253
Sensor fusion is one of the important solutions for the perception problem in self-driving cars, where the main aim is to enhance the perception of the system without losing real-time performance. It is therefore a trade-off problem, and it is often observed that most models with high environment perception cannot perform in a real-time manner. Our article is concerned with camera and lidar data fusion for better environment perception in self-driving cars, considering 3 main classes: cars, cyclists and pedestrians. We fuse the output of a 3D detector model that takes its input from the lidar with the output of a 2D detector that takes its input from the camera, to give a better perception output than either of them separately, while ensuring real-time operation. We addressed the problem using a 3D detector model (Complex-YOLOv3) and a 2D detector model (YOLOv3), applying an image-based fusion method that combines lidar and camera information with a fast and efficient late fusion technique, discussed in detail in this article. We used the mean average precision (mAP) metric to evaluate our object detection model and to compare the proposed approach with the individual models. Finally, we show the results on the KITTI dataset as well as on our real hardware setup, which consists of a Velodyne 16 lidar and Leopard USB cameras. We used Python to develop our algorithm and validated it on the KITTI dataset. We used ROS 2 along with C++ to verify the algorithm on the dataset obtained from our hardware configuration, which proved that the proposed approach can give good results and work efficiently in practical situations in a real-time manner.
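The abstract describes the late fusion at a high level; a minimal sketch of one plausible matching step, in which projected 3D lidar boxes are matched to 2D camera boxes by IoU and matched pairs keep the higher confidence, is shown below. The 3D-to-image projection is omitted and all boxes and scores are invented; this is not necessarily the authors' exact fusion rule.

```python
# Late-fusion sketch under stated assumptions: IoU matching in the image plane.
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

lidar_boxes = [((100, 120, 180, 200), 0.71)]   # projected 3D boxes + scores
camera_boxes = [((105, 118, 178, 205), 0.88),
                ((300, 60, 340, 120), 0.55)]   # 2D boxes + scores

fused = []
for lb, ls in lidar_boxes:
    for cb, cs in camera_boxes:
        if iou(lb, cb) > 0.5:                  # same object seen by both
            fused.append((cb, max(ls, cs)))    # keep the stronger confidence
print(fused)
```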
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
The journal is included in the RSCI