All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Modeling of the effective environment in the Republic of Tatarstan using transport data
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 395-404
Automated urban traffic monitoring systems are widely used to solve various tasks in the intelligent transport systems of different regions. They include video enforcement, video surveillance, traffic management systems, etc. Effective traffic management and rapid response to traffic incidents require continuous monitoring and analysis of information from these complexes, as well as time series forecasting for subsequent anomaly detection in traffic flow. To increase the forecasting quality, data fusion from different sources is needed: it reduces the forecasting error related to possible incorrect values and data gaps. We implemented an approach to short-term and middle-term forecasting of traffic flow (5, 10, 15 min) based on data fusion from video enforcement and video surveillance systems. Forecasting was performed with different recurrent neural network architectures: LSTM, GRU, and bidirectional LSTM with one and two layers. We investigated the forecasting quality of bidirectional LSTM with 64 and 128 neurons in hidden layers, as well as the influence of the input window size (1, 4, 12, 24, 48). The RMSE value was used as the forecasting error. The minimum RMSE = 0.032405 was obtained for the basic LSTM with 64 neurons in the hidden layer and window size = 24.
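The windowed forecasting setup and the RMSE criterion used above can be sketched as follows. This is an illustrative example, not the authors' code: the traffic-flow values and the naive persistence forecast are invented for the demonstration, and a trained LSTM would replace the baseline predictor.

```python
# Sketch: fixed-size input windows from a univariate traffic-flow series,
# scored with RMSE (the error metric used in the paper).
import math

def make_windows(series, window):
    """Split a series into (window, next-value) pairs for supervised training."""
    pairs = []
    for i in range(len(series) - window):
        pairs.append((series[i:i + window], series[i + window]))
    return pairs

def rmse(y_true, y_pred):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

flow = [0.30, 0.32, 0.35, 0.33, 0.31, 0.34, 0.36, 0.35]   # invented values
pairs = make_windows(flow, window=4)            # 4 past samples -> 1 forecast
preds = [w[-1] for w, _ in pairs]               # persistence baseline
truth = [t for _, t in pairs]
print(round(rmse(truth, preds), 4))
```

In the paper's setting the persistence baseline would be replaced by the recurrent network, and the window size (1, 4, 12, 24, 48) is the hyperparameter swept in the experiments.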
-
A study of the influence of two geometric parameters on the accuracy of the hydrostatic problem solution by the SPH method
Computer Research and Modeling, 2021, v. 13, no. 5, pp. 979-992
Two significant geometric parameters are proposed that affect the interpolation of physical quantities in the smoothed particle hydrodynamics (SPH) method: the smoothing coefficient, which connects the particle size and the smoothing radius, and the volume coefficient, which determines the correct particle mass for a given particle distribution in the medium.
The paper proposes a technique for assessing the influence of these parameters on the accuracy of SPH interpolations when solving the hydrostatic problem. To estimate the accuracy, analytical functions of the relative error of the density and the pressure gradient in the medium are introduced; they depend on the smoothing coefficient and the volume coefficient. Fixing a specific interpolation form in the SPH method allows converting the relative error functions from differential form to an algebraic polynomial. The root of this polynomial gives the smoothing coefficient value that provides the minimum interpolation error for a given volume coefficient.
In this work, the relative error functions of the density and the pressure gradient were derived and analyzed for a sample of popular kernels with different smoothing radii. There is no common smoothing coefficient value that provides the minimum error of both SPH interpolations for all the considered kernels. Kernels with different smoothing radii are identified that provide the smallest errors of SPH interpolations when solving the hydrostatic problem. Certain kernels with different smoothing radii were also found that do not allow correct interpolation when the hydrostatic problem is solved by the SPH method.
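How the two parameters enter the SPH summation interpolant can be sketched in one dimension. This is a hedged illustration, not the paper's formulation: a cubic-spline kernel on a uniform hydrostatic column, with h = kappa * dx (kappa playing the role of the smoothing coefficient) and the particle mass set from the reference density (the role of the volume coefficient). All numerical values are chosen for the example.

```python
# Sketch: 1D SPH summation density with a cubic-spline kernel.
def cubic_spline_1d(r, h):
    """1D cubic-spline kernel, support radius 2h."""
    sigma = 2.0 / (3.0 * h)
    q = abs(r) / h
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(x, particles, mass, h):
    """Summation interpolant rho(x) = sum_j m_j W(x - x_j, h)."""
    return sum(mass * cubic_spline_1d(x - xj, h) for xj in particles)

dx, rho0, kappa = 1.0, 1.0, 1.2
particles = [i * dx for i in range(-5, 6)]       # uniform hydrostatic column
rho = sph_density(0.0, particles, mass=rho0 * dx, h=kappa * dx)
rel_error = abs(rho - rho0) / rho0
print(rho, rel_error)
```

The interpolated density deviates slightly from the reference value, and that deviation is exactly the kind of relative-error function of kappa that the paper minimizes analytically.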
-
Analysis of human respiratory reactions in a changed gas environment using a mathematical model
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 281-296
The aim of the work was to study and develop methods for forecasting the dynamics of human respiratory reactions based on mathematical modeling. To achieve this goal, the following tasks were set and solved: the overall structure and a formalized description of the model of the respiratory reflex system were developed and justified; an algorithm for the gas exchange model of the body was built and implemented in software; computational experiments were carried out, and the adequacy of the model was checked against literature data and our own experimental studies.
In this version of the comprehensive model, a modified partial model of the physicochemical properties and acid-base balance of the blood was introduced. The formalized description underlying the model is based on the concept of separating the physiological regulation system into active and passive regulation subsystems. The model was developed in stages. The integrated gas exchange model consists of the following partial models: a basic biophysical model of the gas exchange system; a model of the physicochemical properties and acid-base balance of the blood; a model of the passive mechanisms of gas exchange developed on the basis of the mass balance equations of F. Grodins; and a chemical regulation model developed on the basis of the multifactor model of D. Gray.
The model was implemented in software, with calculations performed in the MatLab programming environment. The equations were solved with the Runge–Kutta–Fehlberg method. It is assumed that the model will be presented in the form of a computer research program that allows implementing various hypotheses about the mechanisms of the observed processes. The expected values of the basic indicators of gas exchange under hypercapnia and hypoxia were calculated. The results of the calculations agree well, both qualitatively and quantitatively, with the data obtained in studies on test subjects. The adequacy check confirmed that the calculation error is within the error of medico-biological experiments. The model can be used for theoretical prediction of the dynamics of the respiratory reactions of the human body in a changed gas environment.
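A schematic single-compartment mass-balance sketch in the spirit of Grodins-type gas-exchange models can illustrate the kind of equation being integrated. This is not the authors' model: the compartment, rates, and the hypercapnic wash-in scenario are invented for the example, and a simple explicit Euler step stands in for the Runge–Kutta–Fehlberg solver mentioned above.

```python
# Sketch: alveolar gas fraction relaxing toward the inspired fraction,
# dF/dt = (ventilation / volume) * (f_in - F), integrated by explicit Euler.
def simulate_washin(f0, f_in, ventilation, volume, dt, steps):
    f = f0
    for _ in range(steps):
        f += dt * (ventilation / volume) * (f_in - f)
    return f

# hypothetical hypercapnic wash-in: inspired CO2 fraction stepped from 0 to 5 %
f_end = simulate_washin(f0=0.0, f_in=0.05, ventilation=6.0, volume=3.0,
                        dt=0.01, steps=500)
print(f_end)
```

The full model couples several such compartments with the blood chemistry and chemical-regulation submodels, which is why an adaptive stiff-friendly solver is used there instead of the bare Euler step shown here.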
-
Numerical Solution of Linear and Higher-order Delay Differential Equations using the Coded Differential Transform Method
Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1091-1099
The aim of the paper is to obtain a numerical solution for linear and higher-order delay differential equations (DDEs) using the coded differential transform method (CDTM). The CDTM is developed and applied to delay problems to show the efficiency of the proposed method. The coded differential transform method is a combination of the differential transform method and Mathematica software. We construct recursive relations for a few delay problems, which result in simultaneous equations, and solve them to obtain various series solution terms using the coded differential transform method. The numerical solution obtained by CDTM is compared with an exact solution. Numerical results and error analysis are presented for delay differential equations to show that the proposed method is suitable for solving delay differential equations. It is established that the delay differential equations under discussion are solvable in a specific domain. The error between the CDTM solution and the exact solution becomes very small if more terms are included in the series solution. The coded differential transform method reduces complex calculations, avoids discretization and linearization, and saves calculation time. In addition, it is easy to implement and robust. Error analysis shows that CDTM is consistent and converges fast. We obtain more accurate results using the coded differential transform method as compared to other methods.
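The recursive relations at the heart of the differential transform method can be shown on a standard pantograph-type test case, y'(t) = y(q t) with y(0) = 1. This is an illustration of the method's recursion, not one of the paper's specific problems: transforming y'(t) gives (k+1) Y[k+1] and transforming y(q t) gives q^k Y[k], so the series coefficients follow by a one-line recurrence.

```python
# Sketch: differential-transform recursion for y'(t) = y(q*t), y(0) = 1.
from fractions import Fraction

def dtm_pantograph(q, n_terms):
    """Return series coefficients Y[0..n_terms-1] of y(t) = sum Y[k] t^k."""
    Y = [Fraction(1)]                      # initial condition y(0) = 1
    for k in range(n_terms - 1):
        # (k+1) * Y[k+1] = q^k * Y[k]  =>  Y[k+1] = q^k * Y[k] / (k+1)
        Y.append(Fraction(q) ** k * Y[k] / (k + 1))
    return Y

Y = dtm_pantograph(Fraction(1, 2), 6)
y_at = lambda t: float(sum(c * t ** k for k, c in enumerate(Y)))
print([str(c) for c in Y])
print(y_at(0.1))
```

As the abstract notes for the CDTM, adding more terms shrinks the truncation error quickly; here the coefficients decay factorially.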
-
Mathematical model of the biometric iris recognition system
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 629-639
Automatic recognition of personal identity by biometric features is based on unique peculiarities or characteristics of people. The biometric identification process consists in creating reference templates and comparing them with new input data. In practice, iris recognition algorithms demonstrate high accuracy and a low percentage of identification errors. The advantages of the iris pattern over other biometric features are determined by its high number of degrees of freedom (nearly 249), the high density of unique features, and their constancy. A high level of recognition reliability is very important because it enables search in large databases: unlike the one-to-one check mode, which is applicable only to small numbers of comparisons, it allows working in the one-to-many identification mode. Every biometric identification system is probabilistic, and its quality is described by such parameters as recognition accuracy, false acceptance rate, and false rejection rate. These characteristics allow comparing identity recognition methods and assessing system performance under various circumstances. This article presents a mathematical model of iris-based biometric identification and its characteristics. In addition, the results of comparing the model with the real recognition process are analyzed. For this analysis, a review of existing iris recognition methods based on different unique-feature vectors was carried out. A Python-based software package is described that builds probability distributions and generates large test data sets. Such data sets can also be used to train the neural network that makes the identification decision. Furthermore, an algorithm combining several iris recognition methods was suggested to improve the qualitative characteristics of the system in comparison with the use of each method separately.
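How the false acceptance rate and false rejection rate arise from comparison scores can be sketched as follows. This is a hedged illustration, not the article's model: the Hamming-distance style scores and the decision threshold are invented for the example.

```python
# Sketch: FAR/FRR at a fixed threshold. A comparison is accepted as a
# match when its dissimilarity score falls below the threshold.
def far_frr(genuine, impostor, threshold):
    frr = sum(s >= threshold for s in genuine) / len(genuine)   # rejected genuines
    far = sum(s < threshold for s in impostor) / len(impostor)  # accepted impostors
    return far, frr

genuine_scores  = [0.12, 0.18, 0.25, 0.31, 0.22]   # same-iris comparisons (invented)
impostor_scores = [0.41, 0.47, 0.45, 0.38, 0.52]   # different-iris comparisons (invented)
far, frr = far_frr(genuine_scores, impostor_scores, threshold=0.30)
print(far, frr)
```

Sweeping the threshold trades FAR against FRR, which is exactly the probabilistic characteristic the abstract uses to compare identification methods.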
-
Technology for collecting initial data for constructing models for assessing the functional state of a human by the pupil's response to illumination changes in solving some transport security problems
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 417-427
This article solves the problem of developing a technology for collecting initial data for building models for assessing the functional state of a person. This state is assessed by the pupillary response of a person to a change in illumination based on the pupillometry method. The method involves the collection and analysis of initial data (pupillograms), presented in the form of time series characterizing the dynamics of changes in the human pupil in response to a light impulse. The drawbacks of the traditional approach to the collection of initial data using computer vision methods and time series smoothing are analyzed. Attention is focused on the importance of the quality of the initial data for the construction of adequate mathematical models. The need for manual marking of the iris and pupil circles is substantiated to improve the accuracy and quality of the initial data. The stages of the proposed technology for collecting initial data are described. An example of the obtained pupillogram is given, which has a smooth shape and does not contain outliers, noise, anomalies, or missing values. Based on the presented technology, a software and hardware complex has been developed, consisting of special software with two main modules and hardware implemented on the basis of a Raspberry Pi 4 Model B microcomputer with peripheral equipment that implements the specified functionality. To evaluate the effectiveness of the developed technology, models of a single-layer perceptron and an ensemble of neural networks are used, built on initial data on the functional state of intoxication of a person.
The studies have shown that the use of manual marking of the initial data (in comparison with automatic computer vision methods) leads to a decrease in the number of errors of the first and second kind and, accordingly, to an increase in the accuracy of assessing the functional state of a person. Thus, the presented technology for collecting initial data can be effectively used to build adequate models for assessing the functional state of a person by pupillary response to changes in illumination. The use of such models is relevant in solving individual problems of ensuring transport security, in particular, monitoring the functional state of drivers.
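Counting errors of the first and second kind for a binary classifier of the functional state can be sketched as follows. This is an illustrative example, not the study's evaluation code: the "intoxicated / sober" labels and predictions are invented.

```python
# Sketch: errors of the first kind (false positives) and second kind
# (false negatives) for a binary classifier, plus its accuracy.
def error_rates(y_true, y_pred):
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))   # first kind
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))   # second kind
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return fp, fn, acc

y_true = [1, 1, 0, 0, 1, 0, 1, 0]   # 1 = intoxicated, 0 = sober (invented)
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
fp, fn, acc = error_rates(y_true, y_pred)
print(fp, fn, acc)
```

The study's claim is that cleaner (manually marked) pupillograms reduce both error counts and therefore raise the accuracy computed this way.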
-
The mathematicity of physics is surprising, but it enables us to understand the laws of nature through the analysis of the mathematical structures describing them. This concerns, however, only physics. The degree of mathematization of biology is low, and attempts to mathematize it are limited to the application of mathematical methods used for the description of physical systems. When doing so, we are likely to commit the error of attributing to biological systems features that they do not have. Some argue that biology needs new mathematical methods of its own, not known from physics. However, because of the specific complexity of biological systems, we should speak of their algorithmicity rather than their mathematicity. As examples of the algorithmic approach, one can point to the so-called individual-based models used in ecology to describe population dynamics, or fractal models applied to describe the geometric complexity of such biological structures as trees.
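A minimal sketch of an individual-based model of the kind mentioned above: instead of a closed-form population equation, each individual independently survives and reproduces by per-individual stochastic rules, and the population dynamics emerge from running the algorithm. The birth and death probabilities here are arbitrary illustration values.

```python
# Sketch: individual-based birth-death dynamics; each individual is an
# independent Bernoulli trial for death and (if surviving) for reproduction.
import random

def step(n, birth_p, death_p, rng):
    survivors = sum(rng.random() > death_p for _ in range(n))
    births = sum(rng.random() < birth_p for _ in range(survivors))
    return survivors + births

rng = random.Random(42)
n = 50
history = [n]
for _ in range(20):
    n = step(n, birth_p=0.3, death_p=0.2, rng=rng)
    history.append(n)
print(history)
```

Nothing here is a formula to be solved; the trajectory exists only as the output of the algorithm, which is the sense of "algorithmicity" argued above.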
-
A neural network model for traffic signs recognition in intelligent transport systems
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 429-435
This work analyzes the problem of traffic sign recognition in intelligent transport systems. The basic concepts of computer vision and image recognition tasks are considered. The most effective approach at present for analyzing and recognizing images is the neural network method, and among all kinds of neural networks, the convolutional neural network has proven itself best. Activation functions such as ReLU and softmax are used to solve the classification problem when recognizing traffic signs. This article proposes a technology for recognizing traffic signs. An approach based on a convolutional neural network was chosen due to its ability to effectively identify essential features and perform classification. The initial data for the neural network model were prepared and a training sample was formed. The Google Colaboratory cloud service with the external deep learning libraries TensorFlow and Keras was used as a platform for developing the intelligent system. The convolutional part of the network is designed to extract characteristic features from the image. The first fully connected layer includes 512 neurons with the ReLU activation function, followed by a Dropout layer, which is used to reduce the effect of network overfitting. The output fully connected layer includes four neurons, corresponding to the problem of recognizing four types of traffic signs. An intelligent traffic sign recognition system has been developed and tested. The convolutional neural network used includes four stages of convolution and subsampling. Evaluation of the efficiency of the traffic sign recognition system using the three-block cross-validation method showed that the error of the neural network model is minimal; therefore, in most cases, new images will be recognized correctly.
In addition, the model has no errors of the first kind, and the error of the second kind has a low value and occurs only when the input image is very noisy.
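The three-block cross-validation scheme mentioned above can be sketched as follows: the labelled set is split into three folds, and each fold in turn is held out for testing while the other two are used for training. This is an illustration only; the dataset is a stand-in list of (image_id, class) pairs, not the authors' data.

```python
# Sketch: three-block (3-fold) cross-validation splits.
def three_block_cv(samples):
    folds = [samples[i::3] for i in range(3)]
    for k in range(3):
        test = folds[k]
        train = [s for j, f in enumerate(folds) if j != k for s in f]
        yield train, test

data = [(i, i % 4) for i in range(12)]        # 12 samples, 4 sign classes (invented)
splits = list(three_block_cv(data))
print([(len(tr), len(te)) for tr, te in splits])
```

Each sample is tested exactly once across the three splits, so the averaged error estimates how the model generalizes to new images.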
-
The problem of trajectory calculation with a homogeneous distribution of results
Computer Research and Modeling, 2014, v. 6, no. 5, pp. 803-828
We consider a new set of tests designed to detect the human capability for parallel calculation. Unlike the tests discussed in our previous works, the new tests support a homogeneous statistical distribution of results. This feature simplifies the analysis of test results and decreases the estimate of the statistical error. The new experimental data are close to the results obtained in previous experiments.
-
Application of simplified implicit Euler method for electrophysiological models
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 845-864
A simplified implicit Euler method was analyzed as an alternative to the explicit Euler method, which is a commonly used method in numerical modeling in electrophysiology. The majority of electrophysiological models are quite stiff, since the dynamics they describe include a wide spectrum of time scales: a fast depolarization, which lasts milliseconds, precedes a considerably slower repolarization, both being phases of the action potential observed in excitable cells. In this work we estimate stiffness by a formula that does not require calculation of the eigenvalues of the Jacobian matrix of the studied ODEs. The efficiency of the numerical methods was compared on typical representatives of detailed and conceptual models of excitable cells: the Hodgkin–Huxley model of a neuron and the Aliev–Panfilov model of a cardiomyocyte. The comparison of the efficiency of the numerical methods was carried out via norms widely used in biomedical applications. The impact of the stiffness ratio on the speedup of the simplified implicit method was studied: a real gain in speed was obtained for the Hodgkin–Huxley model. The benefits of using simple and high-order methods for electrophysiological models are discussed, along with the stability issues of one of the methods. The reasons for using simplified rather than high-order methods in practical simulations are discussed in the corresponding section. We calculated higher-order derivatives of the solutions of the Hodgkin–Huxley model with various stiffness ratios; their maximum absolute values turned out to be quite large. The formula for a numerical method's approximation constant contains these values and hence negates the effect of the other factor (a small term that depends on the order of approximation). This leads to a large value of the global error.
We performed a qualitative stability analysis of the explicit Euler method and were able to estimate the influence of the model's parameters on the border of the region of absolute stability. The latter is used when setting the value of the time step for simulations a priori.
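The stability contrast between the explicit and implicit Euler methods can be shown on the standard stiff test equation y' = -lam * y. This is an illustrative comparison, not the paper's models: for explicit Euler the step must satisfy lam * dt < 2 to stay inside the region of absolute stability, while the implicit step damps the solution for any positive step size.

```python
# Sketch: explicit vs. implicit Euler on the stiff test equation y' = -lam*y.
def explicit_euler(y, lam, dt, steps):
    for _ in range(steps):
        y = y + dt * (-lam * y)            # y_{n+1} = (1 - lam*dt) * y_n
    return y

def implicit_euler(y, lam, dt, steps):
    for _ in range(steps):
        y = y / (1.0 + lam * dt)           # solve y_{n+1} = y_n - dt*lam*y_{n+1}
    return y

lam, dt = 100.0, 0.05                      # lam*dt = 5: outside explicit stability
y_exp = explicit_euler(1.0, lam, dt, 20)
y_imp = implicit_euler(1.0, lam, dt, 20)
print(y_exp, y_imp)
```

The explicit solution blows up while the implicit one decays toward the true solution's behavior, which is why implicit (or simplified implicit) schemes pay off on stiff electrophysiological models despite the extra per-step work.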
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index