The 3rd BRICS Mathematics Conference
Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1015-1016
Neural network analysis of transportation flows of urban agglomeration using the data from public video cameras
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 305-318
Correct modeling of the complex dynamics of urban transportation flows requires collecting large volumes of empirical data to specify the flow modes and identify them. At the same time, setting up a large number of observation posts is expensive and not always technically feasible. As a result, both traffic control systems and urban planners suffer from insufficient factual support, with obvious consequences for the quality of their decisions. As one means of large-scale data collection, at least for qualitative situation analysis, wide-area video cameras are used in various situation centers, where the footage is analyzed by human operators responsible for observation and control. Some of these cameras make their video publicly accessible, which makes them a valuable resource for transportation studies. However, obtaining high-quality data from such cameras poses significant problems related to the theory and practice of image processing. This study is devoted to the practical application of mainstream neural network technologies for estimating the essential characteristics of actual transportation flows. The problems arising in processing these data are analyzed, and solutions are suggested. Convolutional neural networks are used for tracking, and methods for obtaining the basic parameters of transportation flows from these observations are studied. Simplified neural networks are used to prepare training sets for the deep learning network YOLOv4, which is then used to estimate the speed and density of automobile flows.
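As a rough illustration of the last step, the sketch below derives a mean speed and a linear density from per-frame vehicle centroid tracks. The function name, the pixel-to-metre calibration and the fixed-camera geometry are all simplifying assumptions for illustration, not the pipeline used in the paper.

```python
import numpy as np

def flow_parameters(tracks, fps, m_per_px, road_length_m):
    """Estimate mean speed (km/h) and density (vehicles/km) from
    per-vehicle centroid tracks. `tracks` is a list of sequences of
    (x, y) pixel centroids sampled every frame. A single scalar
    pixel-to-metre factor is an illustrative simplification."""
    speeds = []
    for t in tracks:
        t = np.asarray(t, dtype=float)
        if len(t) < 2:
            continue
        # per-frame displacement in metres -> speed in m/s
        d = np.linalg.norm(np.diff(t, axis=0), axis=1) * m_per_px
        speeds.append(d.mean() * fps)
    mean_speed_kmh = 3.6 * float(np.mean(speeds)) if speeds else 0.0
    density_per_km = len(tracks) / (road_length_m / 1000.0)
    return mean_speed_kmh, density_per_km
```

A vehicle moving 10 px per frame at 25 fps with a 0.1 m/px calibration would come out at 90 km/h.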
-
Convolutional neural networks of YOLO family for mobile computer vision systems
Computer Research and Modeling, 2024, v. 16, no. 3, pp. 615-631
The work analyzes the known classes of convolutional neural network models and selects promising ones for detecting flying objects in images. Object detection here refers to the detection, spatial localization and classification of flying objects. The work conducts a comprehensive study of the selected convolutional neural network models in order to identify the most effective ones for building mobile real-time computer vision systems. It is shown that, given the formulated requirements for such systems, the most suitable models for detecting flying objects in images are those of the YOLO family, of which five should be considered: YOLOv4, YOLOv4-Tiny, YOLOv4-CSP, YOLOv7 and YOLOv7-Tiny. An appropriate dataset has been developed for training, validation and comprehensive study of these models. Each labeled image in the dataset contains one or several flying objects of four classes: “bird”, “aircraft-type unmanned aerial vehicle”, “helicopter-type unmanned aerial vehicle”, and “unknown object” (objects in airspace not belonging to the first three classes). The research showed that all the models exceed the specified threshold for object detection speed; however, only the YOLOv4-CSP and YOLOv7 models partially satisfy the requirements for the accuracy of flying-object detection. The most difficult class to detect turned out to be “bird”. The most effective model is YOLOv7, with YOLOv4-CSP in second place.
Both models are recommended for use as part of a mobile real-time computer vision system, provided they are additionally trained on a larger number of images with “bird” objects so that they satisfy the detection-accuracy requirement for each of the four classes.
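The accuracy side of such a comparison reduces to matching detections against ground truth at an IoU threshold. Below is a minimal, hedged sketch of that matching step for a single class at a fixed threshold; a full mAP computation additionally sweeps confidence thresholds and averages over classes, which is omitted here.

```python
import numpy as np

def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def precision_recall(dets, gts, iou_thr=0.5):
    """Greedily match detections (sorted by confidence) to ground-truth
    boxes of one class; each ground truth may be matched once.
    `dets` is a list of (box, confidence) pairs, `gts` a list of boxes."""
    dets = sorted(dets, key=lambda d: -d[1])
    matched, tp = set(), 0
    for box, _ in dets:
        best, best_j = 0.0, -1
        for j, g in enumerate(gts):
            if j in matched:
                continue
            v = iou(box, g)
            if v > best:
                best, best_j = v, j
        if best >= iou_thr:
            tp += 1
            matched.add(best_j)
    prec = tp / len(dets) if dets else 0.0
    rec = tp / len(gts) if gts else 0.0
    return prec, rec
```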
-
Symmetries of differential equations in computer vision applications
Computer Research and Modeling, 2010, v. 2, no. 4, pp. 369-376
In this work we present a generalization of the well-known approach to constructing invariant feature vectors of images in computer vision applications. The key feature of the suggested algorithm is the replacement of the commonly used Gaussian filter by the convolution of the image function with the Green's function of an evolution operator, which inherits the symmetries of that operator. The use of this more general filtration yields additional characteristics for the invariant feature vectors.
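The substitution described above can be sketched numerically: filter by convolution with an arbitrary Green's-function kernel, of which the Gaussian (the heat operator's Green's function) is the special case. This is an illustrative FFT-based sketch with periodic boundary conditions, not the authors' implementation.

```python
import numpy as np

def greens_filter(image, kernel):
    """Circular convolution of `image` with a centred kernel; for the
    heat operator's Green's function this is a Gaussian blur, but any
    centred, normalized kernel may be substituted."""
    K = np.fft.fft2(np.fft.ifftshift(kernel), s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * K))

def gaussian_kernel(shape, sigma):
    """The special case: a normalized Gaussian kernel."""
    h, w = shape
    y = np.arange(h) - h // 2
    x = np.arange(w) - w // 2
    g = np.exp(-(y[:, None]**2 + x[None, :]**2) / (2 * sigma**2))
    return g / g.sum()
```

Since the kernel is normalized, filtering a constant image returns it unchanged, which is a quick sanity check for any substituted Green's function.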
-
Analysis of the possibility of investigation of hydrodynamic responses and landing dynamics of space module impacting water with FlowVision CFD software
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 47-55
The paper presents verification results for investigations of the hydrodynamic effects on a reentry conical-segmental space vehicle carried out with the FlowVision program complex. The purpose of the study is to verify the use of FlowVision for the problems mentioned above by comparing calculated data with experimental data obtained on Apollo landing models and on the newly developed reentry spacecraft of the manned transport spaceship designed by RSC Energia. The comparison was carried out on the pressure values on spacecraft model surfaces during water landing and on the motion parameters of the center of inertia.
The results show good agreement between the experimental and calculated data on the force effects on the vehicle structure during water landing and on its motion parameters in the water. Computer simulation reproduces sufficiently well the influence of variations of initial velocities and water entry angles on the water landing process.
Computer simulation provides simultaneous acquisition of all the data needed to investigate water landing peculiarities during structural design, namely hydrodynamic effects for structural strength calculations, the parameters and dynamics of center-of-mass motion and vehicle rotation around the center of mass for estimating water landing conditions, as well as vehicle stability after landing.
The obtained results confirm the suitability of the FlowVision program complex for investigations of water landing vehicles and of the influence of different landing regimes over a wide range of initial conditions, which permits a considerable reduction in the extent of expensive experimental tests and makes it possible to realize landing conditions that are difficult to reproduce in physical model experiments.
-
On quality of object tracking algorithms
Computer Research and Modeling, 2012, v. 4, no. 2, pp. 303-313
Object movement in a video is classified as regular (movement along a continuous trajectory) or non-regular (the trajectory breaks due to occlusions by other objects, object jumps, and so on). In the case of regular object movement, a tracker is considered as a dynamical system, which makes it possible to use the conditions of existence, uniqueness and stability of the dynamical system's solution. This condition is used as the correctness criterion of the tracking process. A quantitative criterion for assessing correct mean-shift tracking based on the Lipschitz condition is also suggested. The results are generalized to an arbitrary tracker.
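A correctness check in the spirit of this criterion can be sketched as a Lipschitz-type bound on per-frame displacement: a trajectory whose steps stay below the bound is treated as regular, while a jump beyond it signals an occlusion or a tracker failure. The function name and the fixed threshold are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def is_regular_track(centroids, max_step):
    """Treat a trajectory as regular if its per-frame displacement
    satisfies |x(t+1) - x(t)| <= max_step (a discrete Lipschitz-type
    bound). Returns the verdict and the largest observed step."""
    c = np.asarray(centroids, dtype=float)
    steps = np.linalg.norm(np.diff(c, axis=0), axis=1)
    return bool(np.all(steps <= max_step)), float(steps.max())
```

In practice the bound would come from the expected object speed and the frame rate rather than a hand-picked constant.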
-
Dual-pass Feature-Fused SSD model for detecting multi-scale images of workers on the construction site
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 57-73
When recognizing workers in images of a construction site obtained from surveillance cameras, the objects of detection typically differ strongly in spatial scale from each other and from other objects. The accuracy of small-object detection can be increased by using the Feature-Fused modification of the SSD detector. Together with overlapping image slicing at inference time, this model copes well with detecting small objects. However, the practical use of this approach requires manual tuning of the slicing parameters, which reduces detection accuracy on scenes that differ from those used in training, as well as on large objects. In this paper, we propose an algorithm for the automatic selection of image slicing parameters depending on the ratio of the characteristic geometric dimensions of objects in the image. We have developed a two-pass version of the Feature-Fused SSD detector for the automatic determination of optimal slicing parameters. On the first pass, a fast truncated version of the detector is used to determine the characteristic sizes of the objects of interest. On the second pass, the final detection is performed with the slicing parameters selected after the first pass. A dataset of images of workers on a construction site was collected; it includes large, small and mixed-scale images of workers. To compare the detection results of a one-pass algorithm without slicing of the input image, a one-pass algorithm with uniform slicing, and the two-pass algorithm with optimal slicing selection, we considered test cases with only large objects, with very small objects, and with a high density of objects both in the foreground and in the background or only in the background.
In the range of cases considered, our approach is superior to the compared ones, copes well with the problem of double detections, and demonstrates a quality of 0.82–0.91 by the mAP (mean Average Precision) metric.
-
Approaches for image processing in the decision support system of the center for automated recording of administrative offenses of the road traffic
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 405-415
We suggest approaches for solving image processing tasks in the decision support system (DSS) of the Center for Automated Recording of Administrative Offenses of the Road Traffic (CARAO). The main task of this system is to assist the operator in obtaining accurate information about the vehicle registration plate and the vehicle brand/model from images produced by photo and video recording systems. We suggest an approach to vehicle registration plate recognition and brand/model classification based on modern neural network models. The LPRNet model, supplemented by a Spatial Transformer Layer, was used to recognize the registration plate, and the ResNeXt-101-32x8d model was used to classify the vehicle brand/model. We suggest an approach to constructing the training set for the plate-recognition network based on computer vision methods and machine learning algorithms: the SIFT algorithm was used to detect and describe local features in images with the vehicle registration plate, and DBSCAN clustering was used to detect and remove outliers among these features. The accuracy of registration plate recognition was 96% on the testing set. We also suggest an approach to improving the efficiency of the ResNeXt-101-32x8d model at the retraining and classification stages. It is based on a new convolutional architecture with “frozen” weight coefficients of the convolutional layers, an additional convolutional layer that parallelizes the classification process, and a set of binary classifiers at the output. This approach significantly reduced the retraining time of the neural network when a new vehicle brand/model class was needed. The final accuracy of brand/model classification was 99% on the testing set.
The proposed approaches were tested and implemented in the DSS of the CARAO of the Republic of Tatarstan.
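The outlier-removal step can be sketched directly: run DBSCAN on the 2-D keypoint locations and discard points labelled as noise. The `eps`/`min_samples` values are illustrative, not the values used in the DSS, and real keypoints would come from a SIFT detector rather than a plain coordinate list.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def drop_outlier_keypoints(points, eps=15.0, min_samples=5):
    """Remove spatial outliers among local-feature locations (e.g. SIFT
    keypoints on a licence-plate crop). DBSCAN labels sparse points as
    noise (-1); only clustered points are kept."""
    pts = np.asarray(points, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    return pts[labels != -1]
```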
-
The analysis of images in control systems of unmanned automobiles on the base of energy features model
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 369-376
The article shows the relevance of research in the field of creating control systems for unmanned vehicles based on computer vision technologies. Computer vision tools are used to solve a large number of different tasks, including determining the location of the car, detecting obstacles, and finding a suitable parking space. These tasks are resource-intensive and have to be performed in real time. It is therefore important to develop effective models, methods and tools that achieve the required speed and accuracy for use in unmanned vehicle control systems; in this respect, the choice of the image representation model is important. In this paper, we consider a model based on the wavelet transform, which makes it possible to form features characterizing the energy estimates of image points and reflecting their significance in terms of their contribution to the overall image energy. To build the model of energy characteristics, a procedure is performed that takes into account the dependencies between wavelet coefficients of different levels and applies heuristic adjustment factors to strengthen or weaken the influence of boundary and interior points. On the basis of the proposed model, it is possible to construct descriptions of images and of their characteristic features for extraction and analysis, including contours, regions and singular points. The effectiveness of the proposed approach to image analysis rests on the fact that the objects in question, such as road signs, road markings or car number plates that need to be detected and identified, are characterized by the relevant features.
In addition, the use of wavelet transforms makes it possible to perform the same basic operations to solve a whole set of tasks in onboard unmanned vehicle systems, including primary processing, segmentation, description, recognition and compression of images. Such a unified approach reduces the time for performing all the procedures and lowers the requirements on the computing resources of the onboard system of an unmanned vehicle.
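A minimal stand-in for the energy-feature idea is a one-level 2-D Haar transform followed by summing the squared detail coefficients per point. This sketch omits the paper's inter-level dependencies and heuristic boundary/interior weights; it only illustrates how a per-point energy map arises from wavelet detail sub-bands.

```python
import numpy as np

def haar_energy_map(img):
    """One-level 2-D Haar transform on non-overlapping 2x2 blocks and
    the per-block energy of the three detail sub-bands. Smooth areas
    give zero energy; edges and textures give positive energy."""
    a = np.asarray(img, dtype=float)
    h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
    a = a[:h, :w]
    b00, b01 = a[0::2, 0::2], a[0::2, 1::2]
    b10, b11 = a[1::2, 0::2], a[1::2, 1::2]
    LH = (b00 + b01 - b10 - b11) / 2.0   # horizontal details
    HL = (b00 - b01 + b10 - b11) / 2.0   # vertical details
    HH = (b00 - b01 - b10 + b11) / 2.0   # diagonal details
    return LH**2 + HL**2 + HH**2         # energy per 2x2 block
```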
-
Technology for collecting initial data for constructing models for assessing the functional state of a human by pupil's response to illumination changes in the solution of some problems of transport safety
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 417-427
This article addresses the development of a technology for collecting the initial data needed to build models for assessing the functional state of a human. The state is assessed by the pupil's response to a change in illumination based on the pupillometry method. This method involves the collection and analysis of initial data (pupillograms) presented as time series characterizing the dynamics of the human pupil's reaction to a light impulse. The drawbacks of the traditional approach to collecting initial data using computer vision methods and time-series smoothing are analyzed. Attention is focused on the importance of the quality of the initial data for building adequate mathematical models, and the need for manual marking of the iris and pupil circles to improve the accuracy and quality of the initial data is substantiated. The stages of the proposed technology for collecting initial data are described. An example of the resulting pupillogram is given; it has a smooth shape and contains no outliers, noise, anomalies or missing values. Based on the presented technology, a software and hardware complex was developed, consisting of special software with two main modules and hardware implemented on a Raspberry Pi 4 Model B microcomputer with peripheral equipment providing the specified functionality. To evaluate the effectiveness of the developed technology, models of a single-layer perceptron and an ensemble of neural networks were used, built from initial data on the functional state of intoxication of a person.
The studies showed that the use of manual marking of the initial data (in comparison with automatic computer vision methods) reduces the number of Type I and Type II errors and, accordingly, increases the accuracy of assessing the functional state of a person. Thus, the presented technology for collecting initial data can be effectively used to build adequate models for assessing the functional state of a person by pupillary response to changes in illumination. Such models are relevant to particular transport security problems, in particular monitoring the functional state of drivers.
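To make the time-series side concrete, the sketch below extracts two classical pupillometric features from a cleaned pupillogram: constriction amplitude (baseline minus minimum diameter) and latency from the light flash to the minimum. The function name and this minimal feature pair are illustrative assumptions; the models in the paper use richer descriptors.

```python
import numpy as np

def pupillogram_features(diameter, fps, flash_frame):
    """Given a pupil-diameter time series sampled every frame and the
    frame index of the light flash, return the constriction amplitude
    (baseline minus minimum) and the latency to the minimum in seconds."""
    d = np.asarray(diameter, dtype=float)
    baseline = d[:flash_frame].mean()
    i_min = flash_frame + int(np.argmin(d[flash_frame:]))
    amplitude = baseline - d[i_min]
    latency_s = (i_min - flash_frame) / fps
    return amplitude, latency_s
```

Such scalar features would then feed the perceptron or neural network ensemble as inputs.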
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index