Search results for 'networking':
Articles found: 112
  1. Sabirov A.I., Katasev A.S., Dagaeva M.V.
    A neural network model for traffic signs recognition in intelligent transport systems
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 429-435

    This work analyzes the problem of traffic sign recognition in intelligent transport systems. The basic concepts of computer vision and image recognition tasks are considered. The most effective approach to image analysis and recognition today is the neural network method, and among all kinds of neural networks the convolutional neural network has proven itself best. Activation functions such as ReLU and SoftMax are used to solve the classification problem when recognizing traffic signs. This article proposes a technology for recognizing traffic signs. An approach based on a convolutional neural network was chosen for its ability to effectively identify essential features and perform classification. The initial data for the neural network model were prepared and a training sample was formed. The Google Colaboratory cloud service, with the external deep learning libraries TensorFlow and Keras, was used as the platform for developing the intelligent system. The convolutional part of the network is designed to extract characteristic features from the image. The first fully connected layer includes 512 neurons with the ReLU activation function. It is followed by a Dropout layer, which is used to reduce overfitting. The output fully connected layer includes four neurons, corresponding to the task of recognizing four types of traffic signs. An intelligent traffic sign recognition system has been developed and tested. The convolutional neural network used included four stages of convolution and subsampling. Evaluation of the recognition system using three-fold cross-validation showed that the error of the neural network model is minimal, so in most cases new images will be recognized correctly. In addition, the model makes no type I errors, and its type II error rate is low, occurring only when the input image is very noisy.
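
    A minimal Keras sketch of the architecture described above. Only the 512-neuron ReLU dense layer, the Dropout layer, the four-class softmax output, and the four convolution–subsampling stages are stated in the abstract; the input shape, filter counts, dropout rate, and optimizer below are illustrative assumptions.

        # Sketch of the described CNN; values not given in the abstract are assumed.
        from tensorflow.keras import layers, models

        def build_sign_classifier(input_shape=(32, 32, 3), n_classes=4):
            model = models.Sequential()
            model.add(layers.Input(shape=input_shape))
            # Four convolution + subsampling stages extract characteristic features.
            for filters in (32, 64, 128, 256):  # assumed filter counts
                model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
                model.add(layers.MaxPooling2D((2, 2)))
            model.add(layers.Flatten())
            model.add(layers.Dense(512, activation="relu"))   # 512 neurons, ReLU (per abstract)
            model.add(layers.Dropout(0.5))                    # reduces overfitting; rate assumed
            model.add(layers.Dense(n_classes, activation="softmax"))  # four sign classes
            model.compile(optimizer="adam", loss="categorical_crossentropy",
                          metrics=["accuracy"])
            return model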

  2. Kurakin P.V.
    Technoscape: multi-agent model for evolution of network of cities, joined by production and trade links
    Computer Research and Modeling, 2022, v. 14, no. 1, pp. 163-178

    The paper presents an agent-based model of city formation named Technoscape, which is both local and non-local. To a certain degree, Technoscape can also be viewed as a model of the emergence of a global economy. The current version of the model implements a very simple form of agent behavior and interaction, yet it produces rather interesting spatio-temporal patterns.

    Locality and non-locality here refer to the spatial features of how agents interact with each other and with the geographical space on which the evolution takes place. A Technoscape agent is a conventional artisan, a family, or a producing and trading firm; the model makes no distinction between production and trade. Agents are located on, and move through, a bounded two-dimensional space divided into square cells. The model demonstrates the concentration of agents in a small set of cells, which is interpreted as "city" formation. Agents are immortal; they do not mutate or evolve, although that is an interesting direction for the further evolution of the model itself.

    Technoscape exhibits a distinctly new type of self-organization. In part, this self-organization resembles the behavior of the segregation model by Thomas Schelling, although that model's evolution rules differ substantially from Technoscape's. In the Schelling model avalanches occur, yet simple equilibria exist if no new agents are added to the game board, whereas in Technoscape no such equilibria exist. At best, one can observe quasi-equilibria: slowly changing global states.

    One non-trivial phenomenon Technoscape exhibits, which also contrasts with the Schelling segregation model, is the ability of agents to concentrate in particular cells (interpreted as cities) even when local interactions are ignored entirely and only non-local interactions are used.

    At the same time, while agents tend to concentrate in large one-cell cities, the large scale of such cities does not protect them from decay: there is always a process of "enticement" by which agents flow away to new cities.
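
    A minimal sketch of a Technoscape-style dynamic, under strong assumptions: the abstract does not specify the agents' move rule, so the preferential relocation toward already-populated cells used below is purely illustrative. It reproduces the qualitative effect of concentration into one-cell "cities" through non-local interactions alone.

        # Agents on a bounded square grid relocate non-locally, preferring
        # crowded cells; this specific rule is an assumption, not the paper's.
        import numpy as np

        rng = np.random.default_rng(0)
        N, AGENTS, STEPS = 20, 500, 2000
        loc = rng.integers(0, N * N, size=AGENTS)       # each agent's cell index
        pop = np.bincount(loc, minlength=N * N)         # agents per cell

        for _ in range(STEPS):
            i = rng.integers(AGENTS)                    # pick a random agent
            weights = pop + 1                           # non-local pull of crowded cells
            dest = rng.choice(N * N, p=weights / weights.sum())
            pop[loc[i]] -= 1
            loc[i] = dest
            pop[dest] += 1

        print("largest 'city' size:", pop.max())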

  3. Deev A.A., Kalshchikov A.A.
    Coherent constant delay transceiver for a synchronous fiber optic network
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 141-155

    This paper proposes the implementation of a coherent transceiver with a constant delay and the ability to select any clock frequency grid for clocking peripheral DACs and ADCs, for device synchronization, and for data transmission. The choice of the clock frequency grid directly affects the data transfer rate in the network, but it allows one to flexibly configure the network for transmitting clock signals and for subnanosecond generation of sync signals on all devices in the network. A method is proposed for increasing the synchronization accuracy to tenths of a nanosecond by using digital phase detectors and a phase-locked loop (PLL) on the slave device. The use of high-speed fiber-optic communication lines (FOCL) for synchronization makes it possible to exchange control commands and signaling data simultaneously. To simplify and reduce the cost of devices in a synchronous network of transceivers, it is proposed to use a clock signal recovered from the data transmission line to filter phase noise and to form, in the PLL, a frequency grid for heterodyne signals and for clocking peripheral devices, including the DAC and ADC. The results of multiple synchronization tests in the proposed synchronous network are presented.
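
    An illustrative discrete-time sketch of the slave-side discipline loop: a digital phase detector compares the local clock phase with the clock recovered from the line, and a PI loop filter steers the local phase toward it. The gains, noise level, and loop structure are assumptions; the paper's actual PLL is a hardware design.

        # Toy model of a type-II digital PLL tracking a recovered line clock.
        import numpy as np

        rng = np.random.default_rng(1)
        kp, ki = 0.1, 0.01            # assumed proportional/integral gains
        phase, integ = 0.5, 0.0       # initial phase offset (clock cycles), integrator
        for _ in range(500):
            err = -phase + rng.normal(0.0, 1e-3)   # phase detector output + phase noise
            integ += ki * err                       # loop-filter integral path
            phase += kp * err + integ               # steer the local clock (NCO update)
        print(f"residual phase error: {phase:.2e} cycles")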

  4. The creation of a virtual laboratory stand that yields reliable characteristics which can be confirmed against real measurements, taking errors and noise into account (the main feature distinguishing a computational experiment from model studies), is one of the main problems addressed in this work. The following task is considered: there is a rectangular waveguide in single-mode operation, in whose wide wall a technological hole is cut, through which a sample under study is placed into the cavity of the transmission line. The recovery algorithm is as follows: the laboratory measures the network parameters ($S_{11}$ and/or $S_{21}$) of the transmission line with the sample. In the computer model of the laboratory stand, the sample geometry is reconstructed and an iterative process of optimization (or sweeping) of the electrophysical parameters is started; the experimental data serve as the mask (reference) for this process, and the stop criterion is an estimate of proximity (the residual). It is important to note that the developed computer model, despite its apparent simplicity, is initially ill-conditioned. The Comsol modeling environment is used to set up the computational experiment. The results of the computational experiment coincided with the results of laboratory studies to a good degree of accuracy. Thus, experimental verification was carried out for several significant components: the computer model in particular and the algorithm for restoring the target parameters in general. It is important to note that the computer model developed and described in this work may be used effectively in a computational experiment to restore the full dielectric parameters of a target of complex geometry. Weak bianisotropy effects can also be detected, including chirality, gyrotropy, and material nonreciprocity. The resulting model is, by definition, incomplete, but its completeness is the highest of the options considered, and at the same time the resulting model is well-conditioned. Particular attention is paid to the modeling of a coaxial-to-waveguide transition; it is shown that a discrete-element approach is preferable to direct modeling of the geometry of the microwave device.
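
    A hedged sketch of the restoration loop described above: electrophysical parameters of a forward model are optimized until simulated S-parameters match the measured ones, with the residual norm as the stop criterion. The function simulate_s11 below is a hypothetical stand-in for the Comsol model, and the parameterization is an assumption.

        # Parameter restoration by residual minimization against measured data.
        import numpy as np
        from scipy.optimize import minimize

        freqs = np.linspace(8e9, 12e9, 41)            # assumed frequency sweep

        def simulate_s11(eps_r, tan_d, f):
            # Hypothetical placeholder for the waveguide model; returns |S11|.
            return np.abs((eps_r - 1) / (eps_r + 1)) * np.exp(-tan_d * f / 1e10)

        s11_measured = simulate_s11(2.2, 0.02, freqs)  # synthetic "lab" data

        def residual(params):
            eps_r, tan_d = params
            return np.sum((simulate_s11(eps_r, tan_d, freqs) - s11_measured) ** 2)

        result = minimize(residual, x0=[1.5, 0.01], method="Nelder-Mead",
                          options={"fatol": 1e-12})    # residual threshold stops the loop
        print("recovered (eps_r, tan_d):", result.x)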

  5. Salenek I.A., Seliverstov Y.A., Seliverstov S.A., Sofronova E.A.
    Improving the quality of route generation in SUMO based on data from detectors using reinforcement learning
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 137-146

    This work presents a new approach to constructing high-precision routes from transport detector data inside the SUMO traffic modeling package. Existing tools such as flowrouter and routeSampler have a number of disadvantages, such as the lack of interaction with the network while routes are being built. Our rlRouter uses multi-agent reinforcement learning (MARL), where the agents are incoming lanes and the environment is the road network. By performing actions that launch vehicles, agents receive a reward for matching the data from the transport detectors. Parameter Sharing DQN with an LSTM backbone for the Q-function was used as the multi-agent reinforcement learning algorithm.
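
    A minimal sketch of the parameter-shared Q-network named above. The observation window, feature dimension, and action space (how many vehicles a lane launches per step) are assumptions; what the abstract fixes is that all lane-agents share one DQN whose Q-function has an LSTM backbone.

        # Shared Q-network: one set of weights used by every lane-agent.
        from tensorflow.keras import layers, models

        SEQ_LEN, OBS_DIM, N_ACTIONS = 8, 16, 5       # assumed dimensions

        q_net = models.Sequential([
            layers.Input(shape=(SEQ_LEN, OBS_DIM)),  # recent per-lane detector readings
            layers.LSTM(64),                         # recurrent backbone of Q(s, .)
            layers.Dense(64, activation="relu"),
            layers.Dense(N_ACTIONS),                 # one Q-value per launch action
        ])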

    Since the rlRouter is trained inside the SUMO simulation, it can restore routes better by taking into account the interaction of vehicles with each other and with the network infrastructure. We modeled diverse traffic situations at three different junctions in order to compare the performance of SUMO's routers with the rlRouter. We used Mean Absolute Error (MAE) as the measure of deviation from both the cumulative detector data and the route data. The rlRouter achieved the closest agreement with the detector data. We also found that by maximizing the reward for matching the detectors, the resulting routes also get closer to the real ones. Although the routes recovered using rlRouter are superior to those obtained with the SUMO tools, they do not fully correspond to the real ones because of the natural limitations of induction-loop detectors. To obtain more plausible routes, it is necessary to equip junctions with other types of transport counters, for example, camera detectors.

  6. Kalitin K.Y., Nevzorov A.A., Spasov A.A., Mukha O.Y.
    Deep learning analysis of intracranial EEG for recognizing drug effects and mechanisms of action
    Computer Research and Modeling, 2024, v. 16, no. 3, pp. 755-772

    Predicting novel drug properties is fundamental to polypharmacology, repositioning, and the study of biologically active substances during the preclinical phase. The use of machine learning, including deep learning methods, for the identification of drug–target interactions has gained increasing popularity in recent years.

    The objective of this study was to develop a method for recognizing psychotropic effects and drug mechanisms of action (drug–target interactions) based on an analysis of the bioelectrical activity of the brain using artificial intelligence technologies.

    Intracranial electroencephalographic (EEG) signals from rats were recorded (4 channels at a sampling frequency of 500 Hz) after the administration of psychotropic drugs (gabapentin, diazepam, carbamazepine, pregabalin, eslicarbazepine, phenazepam, arecoline, pentylenetetrazole, picrotoxin, pilocarpine, chloral hydrate). The signals were divided into 2-second epochs, converted into $2000\times 4$ images, and fed into an autoencoder. The output of the bottleneck layer was subjected to classification and clustering using t-SNE, and the distances between the resulting clusters were calculated. As an alternative, an approach based on feature extraction with dimensionality reduction using principal component analysis and kernel support vector machine (kSVM) classification was used. Models were validated using 5-fold cross-validation.
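
    A sketch of the alternative pipeline named above: feature vectors are reduced with principal component analysis and classified with a kernel SVM under 5-fold cross-validation. The random placeholder matrix stands in for flattened $2000\times 4$ EEG epochs; the dimensions and number of components are assumptions.

        # PCA + kernel SVM baseline with 5-fold cross-validation (scikit-learn).
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(2)
        X = rng.normal(size=(550, 8000))    # placeholder for flattened 2000x4 epochs
        y = rng.integers(0, 11, size=550)   # labels for the 11 drugs

        clf = make_pipeline(StandardScaler(), PCA(n_components=50), SVC(kernel="rbf"))
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")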

    The classification accuracy obtained for 11 drugs during cross-validation was $0.580 \pm 0.021$, which is significantly higher than the accuracy of the random classifier $(0.091 \pm 0.045, p < 0.0001)$ and the kSVM $(0.441 \pm 0.035, p < 0.05)$. t-SNE maps were generated from the bottleneck parameters of intracranial EEG signals. The relative proximity of the signal clusters in the parametric space was assessed.

    The present study introduces an original method for biopotential-mediated prediction of drug effects and mechanisms of action (drug–target interactions). This method employs convolutional neural networks in conjunction with a modified selective parameter reduction algorithm. Post-treatment EEGs were compressed into a unified parameter space. Using a neural network classifier and clustering, we were able to recognize the patterns of neuronal response to the administration of various psychotropic drugs.

  7. Bogdanov A.V., Gankevich I.G., Gayduchok V.Yu., Yuzhanin N.V.
    Running applications on a hybrid cluster
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 475-483

    A hybrid cluster implies the use of computational devices with radically different architectures. Usually, these are a conventional CPU architecture (e.g. x86_64) and a GPU architecture (e.g. NVIDIA CUDA). Creating and exploiting such a cluster requires some experience: in order to harness all the computational power of the described system and get a substantial speedup on computational tasks, many factors should be taken into account. These factors include hardware characteristics (e.g. network infrastructure, the type of data storage, GPU architecture) as well as the software stack (e.g. the MPI implementation, GPGPU libraries). So, in order to run scientific applications, GPU capabilities, software features, task size, and other factors should be considered.

    This report discusses the opportunities and problems of hybrid computations. Some statistics from runs of test programs and applications are demonstrated. The main focus of interest is open-source applications (e.g. OpenFOAM) that support GPGPU (with some parts rewritten to use GPGPU directly or by replacing libraries).

    There are several approaches to organizing heterogeneous computations for different GPU architectures, of which the CUDA library and the OpenCL framework are compared here. The CUDA library is becoming quite typical for hybrid systems with NVIDIA cards, but OpenCL offers portability, which can be a determining factor when choosing a framework for development. We also put emphasis on multi-GPU systems, which are often used to build hybrid clusters. Calculations were performed on the hybrid cluster of the SPbU computing center.

  8. Shepelev V.D., Kostyuchenkov N.V., Shepelev S.D., Alieva A.A., Makarova I.V., Buyvol P.A., Parsin G.A.
    The development of an intelligent system for recognizing the volume and weight characteristics of cargo
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 437-450

    Industrial imaging, or "machine vision", is currently a key technology in many industries, as it can be used to optimize various processes. The purpose of this work is to create a software and hardware complex for measuring the overall and weight characteristics of cargo, based on an intelligent system using neural network identification methods that overcome the technological limitations of similar complexes implemented with ultrasonic and infrared measuring sensors. The complex to be developed will measure cargo to be tariffed and sorted within warehouse complexes without restrictions on its volume and weight characteristics. The system will include an intelligent computer program that determines the volume and weight characteristics of cargo using machine vision technology, and an experimental sample of a stand for measuring the volume and weight of cargo.

    We analyzed solutions to similar problems and noted that the disadvantages of the studied methods are very strict requirements for the location of the camera, as well as the need for manual operations when calculating the dimensions, which cannot be automated without significant modifications. In the course of the work, we investigated various methods of object recognition in images in order to filter frames by the presence of cargo and measure its overall dimensions. We obtained satisfactory results using cameras that combine an optical image-capture method with infrared sensors. As a result of the work, we developed a computer program that captures a continuous stream from Intel RealSense video cameras, extracts a three-dimensional object from the designated area, and calculates the object's overall dimensions. At this stage, we analyzed computer vision techniques, developed an algorithm for the automatic measurement of goods using special cameras, and wrote the software that obtains the overall dimensions of objects in automatic mode.
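
    A hedged sketch of the measurement step: grab one depth frame from an Intel RealSense camera, segment the pixels closer than the empty stand surface, and take the bounding box of that region as the cargo footprint. The stand distance, threshold, and the assumption of a single top-down camera are illustrative; the paper's extraction pipeline is more involved.

        # Rough cargo-dimension estimate from a single RealSense depth frame.
        import numpy as np
        import pyrealsense2 as rs

        pipeline = rs.pipeline()
        config = rs.config()
        config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
        profile = pipeline.start(config)
        depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

        frames = pipeline.wait_for_frames()
        depth = np.asanyarray(frames.get_depth_frame().get_data()) * depth_scale  # metres

        FLOOR = 1.20                                   # assumed empty-stand distance, m
        mask = (depth > 0) & (depth < FLOOR - 0.02)    # pixels belonging to the cargo
        ys, xs = np.nonzero(mask)
        if xs.size:                                    # cargo present in the frame
            height = FLOOR - depth[mask].min()         # top surface of the cargo
            print(f"footprint: {np.ptp(xs)} x {np.ptp(ys)} px, height: {height:.3f} m")
        pipeline.stop()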

    Upon completion of the work, this development can be used as a ready-made solution for transport companies, logistics centers, warehouses of large industrial and commercial enterprises.

  9. Ansori Moch.F., Sumarti N.N., Sidarto K.A., Gunadi I.I.
    An Algorithm for Simulating the Banking Network System and Its Application for Analyzing Macroprudential Policy
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1275-1289

    Modeling banking systems using a network approach has received growing attention in recent years. One of the notable models is that developed by Iori et al., who proposed a banking system model for analyzing systemic risks in interbank networks. The model is built on simple dynamics of several bank balance sheet variables, such as deposits, equity, loans, liquid assets, and interbank lending (or borrowing), in the form of difference equations. Each bank faces random shocks in deposits and loans. The balance sheet is updated at the beginning or end of each period. In the model, banks are grouped into potential lenders and potential borrowers. The potential borrowers are those that lack liquidity, and the potential lenders are those that have excess liquidity after paying dividends and channeling new investment. The borrowers and the lenders are connected through the interbank market. Each borrower is linked to some percentage of randomly chosen potential lenders, from which it borrows funds to maintain its liquidity safety net. If the demand for borrowed funds can be met by the supply of excess liquidity, the borrowing bank survives; if not, it is deemed to be in default and is removed from the banking system. However, in their paper, most of the interbank borrowing-lending mechanism is described qualitatively rather than by detailed mathematical or computational analysis. Therefore, in this paper, we elaborate the mathematics of borrowing and lending in the interbank market and present an algorithm for simulating the model. We also perform simulations to analyze the effects of the model's parameters on banking stability, using the number of surviving banks as the measure. We apply this technique to analyze the effect on banking stability of a macroprudential policy known as the loan-to-deposit-ratio-based reserve requirement.
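
    A minimal sketch of the matching round described above: banks with a liquidity shortfall borrow from randomly linked banks with excess liquidity, and a bank that cannot cover its shortfall defaults and leaves the system. The balance-sheet dynamics, linkage fraction, and parameter values are simplified assumptions, not the paper's full model.

        # One interbank borrowing-lending round with defaults.
        import numpy as np

        rng = np.random.default_rng(3)
        liquidity = rng.normal(0.0, 1.0, size=20)   # positive: excess; negative: shortfall
        alive = np.ones(20, dtype=bool)

        for b in np.argsort(liquidity):             # borrowers, worst shortfall first
            need = -liquidity[b]
            if need <= 0:
                continue                            # potential lender, not a borrower
            lenders = rng.permutation(np.flatnonzero(alive & (liquidity > 0)))
            for l in lenders[: max(1, len(lenders) // 2)]:  # partial random linkage
                take = min(need, liquidity[l])
                liquidity[l] -= take
                need -= take
                if need <= 1e-12:
                    break
            if need > 1e-12:
                alive[b] = False                    # default: removed from the system
            else:
                liquidity[b] = 0.0                  # shortfall covered; bank survives

        print("surviving banks:", int(alive.sum()))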

  10. Bernadotte A., Mazurin A.D.
    Optimization of the brain command dictionary based on the statistical proximity criterion in silent speech recognition task
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 675-690

    In our research, we focus on the classification problem in silent speech recognition, with the aim of developing a brain–computer interface (BCI) based on electroencephalographic (EEG) data that will be capable of assisting people with mental and physical disabilities and of expanding human capabilities in everyday life. Our previous research has shown that the silent pronunciation of some words results in almost identical distributions of EEG signal data. Such a phenomenon degrades the performance of neural network models. This paper proposes a data processing technique that distinguishes between statistically remote and inseparable classes in the dataset. Applying the proposed approach helps us reach the goal of maximizing the semantic load of the dictionary used in the BCI.

    Furthermore, we propose the existence of a statistical criterion that predicts the accuracy of binary classification of the words in a dictionary. Such a criterion aims to estimate the lower and upper bounds of classifier performance by measuring only quantitative statistical properties of the data (in particular, using the Kolmogorov–Smirnov method). We show that higher levels of classification accuracy can be achieved by applying the proposed predictive criterion, which makes it possible to form a dictionary optimized in terms of semantic load for EEG-based BCIs. Furthermore, using such a dictionary as a training dataset for classification problems ensures the statistical remoteness of the classes by taking into account the semantic and phonetic properties of the corresponding words, and it improves the classification performance of silent speech recognition models.
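
    A sketch of the statistical proximity idea: every pair of candidate words is scored with the two-sample Kolmogorov–Smirnov statistic on their EEG feature distributions, and only statistically remote pairs are kept in the dictionary. The one-dimensional placeholder features and the threshold value are assumptions for illustration.

        # Rank word pairs by Kolmogorov-Smirnov distance between feature samples.
        import numpy as np
        from itertools import combinations
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(4)
        # Placeholder 1-D feature distributions for four candidate words.
        words = {f"word_{i}": rng.normal(loc=0.3 * i, scale=1.0, size=400)
                 for i in range(4)}

        THRESHOLD = 0.15    # assumed minimum KS distance for a "remote" pair
        for (w1, x1), (w2, x2) in combinations(words.items(), 2):
            stat, p = ks_2samp(x1, x2)
            verdict = "keep" if stat >= THRESHOLD else "drop (inseparable)"
            print(f"{w1} vs {w2}: KS = {stat:.3f}, p = {p:.3g} -> {verdict}")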
