Search results for 'vision system':
Articles found: 9
  1. Zatserkovnyy A.V., Nurminski E.A.
    Neural network analysis of transportation flows of urban agglomeration using the data from public video cameras
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 305-318

    Correct modeling of the complex dynamics of urban transportation flows requires large volumes of empirical data to specify flow regimes and identify their parameters. At the same time, installing a large number of observation posts is expensive and not always technically feasible. As a result, both traffic control systems and urban planners lack adequate factual support, with obvious consequences for the quality of their decisions. As one means of large-scale data collection, at least for qualitative situation analysis, wide-area video cameras are used in various situation centers, where their feeds are analyzed by human operators responsible for observation and control. Some of these cameras make their video publicly available, which turns them into a valuable resource for transportation studies. However, obtaining high-quality data from such cameras poses significant problems related to the theory and practice of image processing. This study addresses the practical application of mainstream neural network technologies to the estimation of essential characteristics of real transportation flows. The problems arising in processing these data are analyzed and solutions are suggested. Convolutional neural networks are used for tracking, and methods for extracting basic parameters of transportation flows from these observations are studied. Simplified neural networks are used to prepare training sets for the deep learning network YOLOv4, which is then used to estimate the speed and density of automobile flows.
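
    A minimal sketch (not the paper's pipeline) of the kind of detector-based counting this entry describes: a pretrained YOLOv4 model is run through the OpenCV DNN module and vehicles are counted per frame as a crude density proxy. The weight/config file names and the stream URL are placeholders, and speed estimation would additionally require the tracking stage the authors describe.

```python
# Hedged sketch: per-frame vehicle counting with a pretrained YOLOv4 detector.
import cv2
import numpy as np

VEHICLE_CLASS_IDS = {2, 3, 5, 7}   # COCO ids: car, motorcycle, bus, truck

model = cv2.dnn_DetectionModel("yolov4.cfg", "yolov4.weights")       # placeholder files
model.setInputParams(size=(416, 416), scale=1.0 / 255)

cap = cv2.VideoCapture("http://example.com/public_camera/stream")    # placeholder URL
counts = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    vehicles = sum(1 for cid in np.array(class_ids).flatten() if int(cid) in VEHICLE_CLASS_IDS)
    counts.append(vehicles)        # vehicles currently in view ~ instantaneous density proxy

cap.release()
if counts:
    print("mean vehicles in view per frame:", sum(counts) / len(counts))
```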

  2. Nebaba S.G., Markov N.G.
    Convolutional neural networks of YOLO family for mobile computer vision systems
    Computer Research and Modeling, 2024, v. 16, no. 3, pp. 615-631

    The work analyzes known classes of convolutional neural network models and studies promising models selected from them for detecting flying objects in images. Object detection here refers to detecting, spatially localizing, and classifying flying objects. A comprehensive study of the selected convolutional neural network models is conducted in order to identify the most effective ones for building mobile real-time computer vision systems. It is shown that, given the formulated requirements for such systems, the models best suited for detecting flying objects in images belong to the YOLO family, and five models from this family should be considered: YOLOv4, YOLOv4-Tiny, YOLOv4-CSP, YOLOv7 and YOLOv7-Tiny. A dedicated dataset has been developed for training, validation and comprehensive study of these models. Each labeled image in the dataset contains one or several flying objects of four classes: “bird”, “aircraft-type unmanned aerial vehicle”, “helicopter-type unmanned aerial vehicle”, and “unknown object” (objects in airspace not covered by the first three classes). The study has shown that all of the models exceed the specified threshold for object detection speed, but only the YOLOv4-CSP and YOLOv7 models partially satisfy the requirements for detection accuracy. The most difficult class to detect turned out to be “bird”. The most effective model is YOLOv7, with YOLOv4-CSP in second place. Both models are recommended for use in a mobile real-time computer vision system, provided they are additionally trained on a larger number of images with objects of the “bird” class so that the accuracy requirement is met for each of the four classes.
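
    Since the comparison of YOLO variants above hinges on per-class detection quality, the sketch below shows a plain per-class recall check at a fixed IoU threshold on the four classes named in the abstract. The box format, the threshold value, and the toy data are assumptions for illustration; this is not the authors' evaluation code.

```python
# Hedged sketch: per-class detection recall at a fixed IoU threshold.
import numpy as np

CLASSES = ["bird", "uav_aircraft", "uav_helicopter", "unknown"]

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def per_class_recall(gt, pred, thr=0.5):
    """gt/pred: lists of (class_id, box). Returns recall per class with ground truth."""
    hits = np.zeros(len(CLASSES))
    totals = np.zeros(len(CLASSES))
    for cid, gbox in gt:
        totals[cid] += 1
        if any(pcid == cid and iou(gbox, pbox) >= thr for pcid, pbox in pred):
            hits[cid] += 1
    return {CLASSES[i]: hits[i] / totals[i] for i in range(len(CLASSES)) if totals[i]}

# toy example: one bird missed, one aircraft-type UAV found
gt = [(0, (10, 10, 40, 40)), (1, (100, 100, 180, 160))]
pred = [(1, (105, 98, 182, 163))]
print(per_class_recall(gt, pred))   # {'bird': 0.0, 'uav_aircraft': 1.0}
```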

  3. Vrazhnov D.A., Shapovalov A.V., Nikolaev V.V.
    On quality of object tracking algorithms
    Computer Research and Modeling, 2012, v. 4, no. 2, pp. 303-313

    Object motion in a video is classified as regular (the object moves along a continuous trajectory) or non-regular (the trajectory breaks due to occlusions by other objects, object jumps, and so on). In the case of regular motion, the tracker is treated as a dynamical system, which makes it possible to use the conditions of existence, uniqueness, and stability of the dynamical system's solution. This condition is used as a correctness criterion for the tracking process. A quantitative criterion for assessing the correctness of mean-shift tracking, based on the Lipschitz condition, is also proposed. The results are generalized to an arbitrary tracker.

    Views (last year): 20. Citations: 9 (RSCI).
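
    A minimal numerical sketch of the idea in entry 3: one mean-shift step is treated as a mapping T of window centers, and its Lipschitz constant is estimated empirically; a value below 1 over the region of interest is used here as a simplified proxy for the contraction/stability condition the paper discusses. The weight image, window size, and test points are illustrative and not taken from the paper.

```python
# Hedged sketch: empirical Lipschitz estimate for a mean-shift step.
import numpy as np

def mean_shift_step(weights, center, half_win=15):
    """One mean-shift step: weighted centroid of the window around `center` (row, col)."""
    r, c = int(round(center[0])), int(round(center[1]))
    r0, r1 = max(r - half_win, 0), min(r + half_win + 1, weights.shape[0])
    c0, c1 = max(c - half_win, 0), min(c + half_win + 1, weights.shape[1])
    win = weights[r0:r1, c0:c1]
    if win.sum() <= 0:
        return np.array(center, dtype=float)
    rows, cols = np.mgrid[r0:r1, c0:c1]
    return np.array([(rows * win).sum(), (cols * win).sum()]) / win.sum()

def empirical_lipschitz(weights, centers, half_win=15):
    """Largest ||T(x) - T(y)|| / ||x - y|| over all pairs of test centers."""
    ratios = []
    for i, x in enumerate(centers):
        for y in centers[i + 1:]:
            dxy = np.linalg.norm(np.array(x, float) - np.array(y, float))
            if dxy > 0:
                dT = np.linalg.norm(mean_shift_step(weights, x, half_win)
                                    - mean_shift_step(weights, y, half_win))
                ratios.append(dT / dxy)
    return max(ratios)

# toy weight image with a single blob: the mapping should be contracting near it
w = np.zeros((100, 100))
w[40:60, 40:60] = 1.0
centers = [(45, 45), (50, 55), (55, 48)]
print("empirical Lipschitz estimate:", empirical_lipschitz(w, centers))
```
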
  4. Sokolov S.V., Marshakov D.V., Reshetnikova I.V.
    High-precision estimation of the spatial orientation of the video camera of the vision system of the mobile robotic complex
    Computer Research and Modeling, 2025, v. 17, no. 1, pp. 93-107

    The efficiency of mobile robotic systems (MRS) that monitor the traffic situation, urban infrastructure, consequences of emergency situations, etc., directly depends on the quality of their vision systems, which are the most important part of an MRS. In turn, the accuracy of image processing in a vision system depends to a large extent on the accuracy of the spatial orientation of the video camera mounted on the MRS. However, when video cameras are mounted on an MRS, the level of their spatial orientation errors increases sharply due to wind and seismic vibrations, movement of the MRS over rough terrain, etc. The paper therefore considers a general solution to the problem of stochastic estimation of the spatial orientation parameters of video cameras under both random mast vibrations and arbitrary MRS motion. Since methods based on satellite measurements cannot provide the required accuracy in the presence of intense natural and artificial radio interference (whose methods of generation are constantly being improved), the proposed approach relies on autonomous measurement means, both inertial and non-inertial. Using them, however, raises the problem of constructing and stochastically estimating a general model of the video camera motion, whose complexity is determined by the arbitrary motion of the camera, random mast oscillations, measurement disturbances, etc. Since this problem remains unsolved, the paper considers the synthesis of both the video camera motion model in the most general case and the stochastic estimation of its state parameters. The developed algorithm for joint estimation of the spatial orientation parameters of a video camera placed on the MRS mast is invariant to the nature of motion of the mast, the camera, and the MRS itself, providing stability and the required estimation accuracy under the most general assumptions about the nature of the disturbances acting on the sensing elements of the autonomous measuring complex. The results of the numerical experiment allow us to conclude that the proposed approach can be applied in practice to determine the current spatial orientation of an MRS and the video cameras placed on it using inexpensive autonomous measuring devices.
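
    As a toy stand-in for the stochastic estimation described in entry 4 (not the authors' model), the sketch below fuses a noisy rate sensor with a noisy angle sensor through a standard linear Kalman filter to track a single camera tilt angle under mast-like oscillation. The oscillation law, noise levels, and sensor names are assumptions.

```python
# Hedged sketch: scalar-angle Kalman filter fusing gyro rate and inclinometer readings.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 1000
true_angle = 0.1 * np.sin(2 * np.pi * 0.5 * np.arange(n) * dt)   # assumed mast oscillation, rad
true_rate = np.gradient(true_angle, dt)

gyro = true_rate + rng.normal(0, 0.02, n)       # rate sensor, rad/s (assumed noise)
incl = true_angle + rng.normal(0, 0.03, n)      # angle sensor, rad (assumed noise)

# state x = [angle, gyro bias]; angle_{k+1} = angle_k + dt * (gyro_k - bias_k)
F = np.array([[1.0, -dt], [0.0, 1.0]])
B = np.array([dt, 0.0])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-6, 1e-8])
R = np.array([[0.03 ** 2]])

x, P = np.zeros(2), np.eye(2)
est = []
for k in range(n):
    # predict with the gyro reading as control input
    x = F @ x + B * gyro[k]
    P = F @ P @ F.T + Q
    # update with the inclinometer reading
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([incl[k]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])

print("RMS angle error, rad:", np.sqrt(np.mean((np.array(est) - true_angle) ** 2)))
```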

  5. Minnikhanov R.N., Anikin I.V., Dagaeva M.V., Asliamov T.I., Bolshakov T.E.
    Approaches for image processing in the decision support system of the center for automated recording of administrative offenses of the road traffic
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 405-415

    We suggest approaches for solving image processing tasks in the decision support system (DSS) of the Center for Automated Recording of Administrative Offenses of the Road Traffic (CARAO). The main task of this system is to assist the operator in obtaining accurate information about the vehicle registration plate and the vehicle brand/model from images produced by photo and video recording systems. We propose an approach to vehicle registration plate recognition and brand/model classification based on modern neural network models. The LPRNet neural network model, supplemented by a Spatial Transformer Layer, was used to recognize the vehicle registration plate. The ResNeXt-101-32x8d neural network model was used to classify the vehicle brand/model. We propose an approach to constructing the training set for the plate recognition network based on computer vision methods and machine learning algorithms. The SIFT algorithm was used to detect and describe local features in images containing the vehicle registration plate, and DBSCAN clustering was used to detect and remove outliers among these local features. The accuracy of vehicle registration plate recognition was 96% on the test set. We also propose an approach to improving the efficiency of the ResNeXt-101-32x8d model at the additional training and classification stages. It is based on a new convolutional neural network architecture with “frozen” weight coefficients of the convolutional layers, an additional convolutional layer that parallelizes the classification process, and a set of binary classifiers at the output. This approach significantly reduced the additional training time of the neural network when a new vehicle brand/model class had to be added. The final accuracy of vehicle brand/model classification was 99% on the test set. The proposed approaches were tested and implemented in the DSS of the CARAO of the Republic of Tatarstan.
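
    The training-set filtering step described above (SIFT features plus DBSCAN outlier removal) can be sketched with standard OpenCV and scikit-learn calls, as below; the image path and DBSCAN parameters are placeholders, and the full LPRNet/ResNeXt pipeline is not reproduced.

```python
# Hedged sketch: SIFT keypoints filtered by DBSCAN (spatially isolated points get label -1).
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

img = cv2.imread("plate_sample.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder path
assert img is not None, "image not found"

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

pts = np.array([kp.pt for kp in keypoints])                  # (x, y) keypoint coordinates
labels = DBSCAN(eps=25.0, min_samples=5).fit_predict(pts)    # placeholder parameters

kept = [kp for kp, lab in zip(keypoints, labels) if lab != -1]
print(f"kept {len(kept)} of {len(keypoints)} keypoints after outlier removal")
```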

  6. Shleymovich M.P., Dagaeva M.V., Katasev A.S., Lyasheva S.A., Medvedev M.V.
    The analysis of images in control systems of unmanned automobiles on the base of energy features model
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 369-376

    The article shows the relevance of research on control systems for unmanned vehicles based on computer vision technologies. Computer vision tools are used to solve a large number of different tasks, including determining the location of the car, detecting obstacles, and finding a suitable parking space. These tasks are resource intensive and have to be performed in real time. It is therefore important to develop effective models, methods and tools that achieve the required speed and accuracy for use in unmanned vehicle control systems. The choice of image representation model is important here. This paper considers a model based on the wavelet transform, which makes it possible to form features characterizing energy estimates of image points and reflecting their significance in terms of their contribution to the overall image energy. The energy feature model is formed by a procedure that takes into account the dependencies between wavelet coefficients of different levels and applies heuristic adjustment factors to strengthen or weaken the influence of boundary and interior points. On the basis of the proposed model, it is possible to construct image descriptions for extracting and analyzing characteristic features, including contours, regions, and singular points. The effectiveness of the proposed approach to image analysis stems from the fact that the objects of interest, such as road signs, road markings or license plates that need to be detected and identified, are well characterized by such features. In addition, the use of wavelet transforms makes it possible to perform the same basic operations for a whole set of tasks in on-board unmanned vehicle systems, including primary processing, segmentation, description, recognition and compression of images. Such a unified approach reduces the time required for all procedures and lowers the computing resource requirements of the on-board system of an unmanned vehicle.

    Views (last year): 31. Citations: 1 (RSCI).
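
    A simplified sketch of a wavelet energy map in the spirit of entry 6: squared detail coefficients are summed across levels with heuristic level weights. The paper's exact inter-level dependencies and adjustment factors are not reproduced; the wavelet, weights, and threshold below are assumptions.

```python
# Hedged sketch: per-pixel energy map from squared wavelet detail coefficients.
import numpy as np
import pywt
import cv2

img = cv2.imread("road_scene.jpg", cv2.IMREAD_GRAYSCALE)     # placeholder path
assert img is not None, "image not found"
img = img.astype(float)

coeffs = pywt.wavedec2(img, "haar", level=3)   # [cA3, (cH3,cV3,cD3), ..., (cH1,cV1,cD1)]
energy = np.zeros_like(img)
level_weights = [1.0, 0.7, 0.5]                # heuristic: coarser levels weigh more

for w, (cH, cV, cD) in zip(level_weights, coeffs[1:]):
    e = cH ** 2 + cV ** 2 + cD ** 2            # detail energy at this level
    e = cv2.resize(e, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_LINEAR)
    energy += w * e

energy /= energy.max() + 1e-9                  # normalize to [0, 1]
salient = energy > 0.2                         # crude mask of "significant" points
print("fraction of salient pixels:", salient.mean())
```
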
  7. Sabirov A.I., Katasev A.S., Dagaeva M.V.
    A neural network model for traffic signs recognition in intelligent transport systems
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 429-435

    This work analyzes the problem of traffic sign recognition in intelligent transport systems. The basic concepts of computer vision and image recognition tasks are considered. The most effective approach to image analysis and recognition today is the neural network method, and among all kinds of neural networks the convolutional neural network has proven itself best. Activation functions such as ReLU and softmax are used to solve the classification problem in traffic sign recognition. The article proposes a technology for recognizing traffic signs. The choice of a convolutional neural network is justified by its ability to effectively extract essential features and perform classification. The initial data for the neural network model were prepared and a training sample was formed. The Google Colaboratory cloud service with the external deep learning libraries TensorFlow and Keras was used as the platform for developing the intelligent system. The convolutional part of the network is designed to extract characteristic features from the image. The first fully connected layer includes 512 neurons with the ReLU activation function, followed by a Dropout layer used to reduce overfitting. The output fully connected layer includes four neurons, corresponding to the recognition of four types of traffic signs. An intelligent traffic sign recognition system has been developed and tested. The convolutional neural network used includes four stages of convolution and subsampling. Evaluation of the traffic sign recognition system with the three-block cross-validation method showed that the error of the neural network model is minimal, so in most cases new images will be recognized correctly. In addition, the model makes no errors of the first kind, and errors of the second kind occur rarely and only when the input image is very noisy.
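
    The architecture outlined in entry 7 (four convolution/subsampling stages, a 512-neuron ReLU layer, Dropout, and a 4-neuron softmax output) can be expressed in Keras roughly as follows; filter counts, kernel sizes, input shape, and the dropout rate are not specified in the abstract and are assumed here.

```python
# Hedged sketch: a Keras model matching the described layer layout, with assumed hyperparameters.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64, 64, 3)),                        # assumed input size
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(512, activation="relu"),                    # 512-neuron ReLU layer
    layers.Dropout(0.5),                                     # assumed dropout rate
    layers.Dense(4, activation="softmax"),                   # four traffic sign classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```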

  8. Shepelev V.D., Kostyuchenkov N.V., Shepelev S.D., Alieva A.A., Makarova I.V., Buyvol P.A., Parsin G.A.
    The development of an intelligent system for recognizing the volume and weight characteristics of cargo
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 437-450

    Industrial imaging, or “machine vision”, is currently a key technology in many industries, as it can be used to optimize various processes. The purpose of this work is to create a software and hardware complex for measuring the overall dimensions and weight of cargo, based on an intelligent system that uses neural network identification methods to overcome the technological limitations of similar complexes built on ultrasonic and infrared measuring sensors. The complex will measure cargo to be tariffed and sorted within warehouse complexes without restrictions on its volume and weight characteristics. The system includes an intelligent computer program that determines the volume and weight characteristics of cargo using machine vision technology and an experimental prototype of a stand for measuring cargo volume and weight.

    We analyzed existing solutions to similar problems and noted that the main drawbacks of the studied methods are very strict requirements on camera placement and the need for manual operations when calculating the dimensions, which cannot be automated without significant modifications. In the course of the work, we investigated various methods of object recognition in images in order to filter frames by the presence of cargo and measure its overall dimensions. Satisfactory results were obtained with cameras that combine optical image capture with infrared sensors. As a result, we developed a computer program that captures a continuous stream from Intel RealSense cameras, extracts a three-dimensional object from the designated area, and calculates its overall dimensions. At this stage, we analyzed computer vision techniques and developed an algorithm and software for the automatic measurement of goods with such cameras, producing the overall dimensions of objects in automatic mode.

    Upon completion of the work, this development can be used as a ready-made solution for transport companies, logistics centers, warehouses of large industrial and commercial enterprises.
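
    A rough sketch of the depth-based measurement idea in entry 8, assuming a RealSense depth camera looking straight down at a floor plane at a known distance. The region bounds, thresholds, and this simplified geometry are assumptions and do not reproduce the authors' calibrated measuring stand.

```python
# Hedged sketch: box height and footprint from one RealSense depth frame over a fixed region.
import numpy as np
import pyrealsense2 as rs

FLOOR_Z = 1.20                                  # metres from camera to empty floor (assumed)
ROI = (slice(100, 380), slice(160, 480))        # designated measurement area (rows, cols)

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
profile = pipeline.start(config)
try:
    scale = profile.get_device().first_depth_sensor().get_depth_scale()
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    intr = depth_frame.profile.as_video_stream_profile().intrinsics
    depth = np.asanyarray(depth_frame.get_data()).astype(float) * scale   # metres

    roi = depth[ROI]
    obj = (roi > 0.1) & (roi < FLOOR_Z - 0.02)          # pixels noticeably above the floor
    if obj.any():
        top_z = np.percentile(roi[obj], 5)              # distance to the object's top surface
        height = FLOOR_Z - top_z
        rows, cols = np.nonzero(obj)
        width = (cols.max() - cols.min()) * top_z / intr.fx    # pinhole projection
        length = (rows.max() - rows.min()) * top_z / intr.fy
        print(f"approx. dimensions: {length:.2f} x {width:.2f} x {height:.2f} m")
finally:
    pipeline.stop()
```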

  9.
    Medical image segmentation is one of the most challenging tasks in medical image analysis. It separates organ or lesion pixels from the background of medical images such as MRI or CT scans in order to provide critical information about the volumes and shapes of human organs. In scientific imaging, medical imaging is considered one of the most important topics due to the rapid and continuing progress in computerized medical image visualization, advances in analysis approaches, and computer-aided diagnosis. Digital image processing is becoming increasingly important in healthcare because of the growing use of direct digital imaging systems for medical diagnostics. Various transformations are generally needed to extract image data. Moreover, a digital image can be considered an approximation of a real scene and therefore carries uncertainty derived from the constraints of the vision process, and information on the level of this uncertainty influences an expert's judgement. To address this challenge, we propose a novel framework based on the interval concept, which is a good tool for dealing with uncertainty. In the proposed approach, a medical image is transformed into an interval-valued representation, and entropies are defined for the image object and background. Thresholds are then determined for the lower-bound and the upper-bound images, and the mean value is taken for the final output. To demonstrate the effectiveness of the proposed framework, we evaluate it on a synthetic image with ground truth. Experimental results show how the performance of entropy-based threshold segmentation can be improved by the proposed approach to overcome ambiguity.
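
    A minimal sketch of the interval-valued entropy thresholding described in entry 9: lower- and upper-bound images are built with a local min/max filter, a maximum-entropy (Kapur) threshold is computed for each, and the two thresholds are averaged. The window size and the use of Kapur's criterion are assumptions for illustration.

```python
# Hedged sketch: interval-valued (lower/upper bound) maximum-entropy thresholding.
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def _segment_entropy(q):
    q = q[q > 0]
    return -np.sum(q * np.log(q))

def kapur_threshold(img):
    """Maximum-entropy (Kapur) threshold for an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        h = _segment_entropy(p[:t] / p0) + _segment_entropy(p[t:] / p1)
        if h > best_h:
            best_t, best_h = t, h
    return best_t

def interval_entropy_segment(img, win=3):
    lower = minimum_filter(img, size=win)      # lower-bound image
    upper = maximum_filter(img, size=win)      # upper-bound image
    t = 0.5 * (kapur_threshold(lower) + kapur_threshold(upper))
    return (img >= t).astype(np.uint8), t

# synthetic test: bright square on a dark noisy background
rng = np.random.default_rng(1)
img = rng.normal(60, 10, (128, 128))
img[40:90, 40:90] += 120
img = np.clip(img, 0, 255).astype(np.uint8)
mask, t = interval_entropy_segment(img)
print("threshold:", t, "segmented object fraction:", round(float(mask.mean()), 3))
```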
