Search results for 'networking':
Articles found: 112
  1. Kotliarova E.V., Krivosheev K.Yu., Gasnikova E.V., Sharovatova Y.I., Shurupov A.V.
    Proof of the connection between the Beckmann model with degenerate cost functions and the model of stable dynamics
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 335-342

    Since the 1950s, the field of city transport modelling has progressed rapidly, and the first equilibrium models of traffic flow distribution appeared. The most popular model (which is still widely used) was the Beckmann model, based on the two Wardrop principles. The core of the model can be briefly described as the search for a Nash equilibrium in a population game in which the losses of agents (drivers) are determined by the chosen path and its cost, with the correspondence matrix being fixed. The cost of a path is calculated as the sum of the costs of the path segments (graph edges) included in it. The cost of an edge (edge travel time) is determined by the amount of traffic on this edge (more traffic means longer travel time), and the flow on a graph edge is the sum of the flows over all paths passing through that edge. Thus, the cost of traveling along a path is determined not only by the choice of the path but also by the paths other drivers have chosen, which makes this a standard game-theoretic setting. The way the cost functions are constructed allows the search for equilibrium to be reduced to solving an optimization problem (the game is potential in this case). If the cost functions are monotone and non-decreasing, the optimization problem is convex. Different assumptions about the cost functions yield different models; the most popular one is based on the BPR cost function, and such functions are widely used in calculations for real cities. However, at the beginning of the XXI century, Yu. E. Nesterov and A. de Palma showed that Beckmann-type models have serious weak points, which can be fixed using what the authors called the stable dynamics model. The search for equilibrium in it also reduces to an optimization problem, moreover, to a linear programming problem. In 2013, A. V. Gasnikov discovered that the stable dynamics model can be obtained by a passage to the limit in the Beckmann model. However, this was done only for several practically important, yet still special, cases; in general, the question of whether this passage to the limit is possible remained open. In this paper, we justify the possibility of the above-mentioned passage to the limit in the general case, when the cost function of traveling along an edge, as a function of the flow along that edge, degenerates into a function equal to the fixed cost while the flow is below the edge capacity and to plus infinity once the capacity is exceeded.
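    For reference, a standard form of the BPR edge cost and the degenerate limit described above can be written as follows (a textbook formulation given here for illustration, not quoted from the paper): the BPR travel time is $\tau_e(f_e) = \bar\tau_e \bigl(1 + \alpha (f_e/\bar f_e)^{\beta}\bigr)$, where $\bar\tau_e$ is the free-flow travel time, $\bar f_e$ the edge capacity, and $\alpha$, $\beta$ are model parameters; in the degenerate limit (e.g. $\beta \to \infty$) it turns into

      $\tau_e(f_e) = \begin{cases} \bar\tau_e, & f_e < \bar f_e, \\ +\infty, & f_e > \bar f_e, \end{cases}$

    which is exactly the edge cost of the stable dynamics model.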

  2. Terekhin A.T., Budilova E.V., Karpenko M.P., Kachalova L.M., Chmyhova E.V.
    Lyapunov function as a tool for the study of cognitive and regulatory processes in organism
    Computer Research and Modeling, 2009, v. 1, no. 4, pp. 449-456

    Cognitive and regulatory processes in the organism are ensured by the functioning of several different network systems: neural, endocrine, immune, and genetic ones. These systems are, however, closely related and form a single integrated neurogenohumoral cognitive-regulatory dynamic system of the organism. A review of publications is given which shows that it is possible to associate with this dynamic system a corresponding Lyapunov function (energy function, potential function) and that analyzing this function, thanks to its geometrical interpretability, makes it easy to discover a set of general properties of the cognitive and regulatory functioning of the organism.

    Views (last year): 4. Citations: 5 (RSCI).
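    A classic example of such an energy function for a network of interacting units is the Hopfield energy (given here only as an illustration of the general idea, not taken from the paper under review): $E(s) = -\tfrac{1}{2}\sum_{i \ne j} w_{ij} s_i s_j + \sum_i \theta_i s_i$, where $s_i$ are unit states, $w_{ij}$ symmetric connection weights, and $\theta_i$ thresholds; $E$ does not increase along the network dynamics, so stable states of the system correspond to local minima of this function.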
  3. Suvorov N.V., Shleymovich M.P.
    Mathematical model of the biometric iris recognition system
    Computer Research and Modeling, 2020, v. 12, no. 3, pp. 629-639

    Automatic recognition of personal identity by biometric features is based on unique peculiarities or characteristics of people. The biometric identification process consists in creating reference templates and comparing them with new input data. In practice, iris pattern recognition algorithms demonstrate high accuracy and a low percentage of identification errors. The advantages of the iris pattern over other biometric features are determined by its high number of degrees of freedom (nearly 249), high density of unique features, and constancy. A high level of recognition reliability is very important because it enables search in large databases: unlike the one-to-one verification mode, which is applicable only to a small number of comparisons, it allows working in the one-to-many identification mode. Every biometric identification system is probabilistic, and its quality is described by such parameters as recognition accuracy, false acceptance rate, and false rejection rate. These characteristics allow one to compare identity recognition methods and assess the system performance under any circumstances. This article describes a mathematical model of biometric identification by iris pattern and its characteristics. In addition, the results of comparing the model with the real recognition process are analyzed. For this analysis, a review of existing iris pattern recognition methods based on different feature vectors was carried out. A Python-based software package is also described; it builds probabilistic distributions and generates large test data sets, which can also be used to train the identification decision-making neural network. Furthermore, an algorithm combining several iris pattern identification methods was suggested to improve the quality characteristics of the system in comparison with using each method separately.
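    As an illustration of how the false acceptance and false rejection rates are estimated at a given decision threshold (a generic sketch with assumed score distributions, not the authors' software package):

      import numpy as np

      # Hypothetical matching scores: higher score = better match.
      rng = np.random.default_rng(0)
      genuine = rng.normal(0.8, 0.05, 10_000)    # same-person comparisons
      impostor = rng.normal(0.5, 0.08, 10_000)   # different-person comparisons

      threshold = 0.65
      far = np.mean(impostor >= threshold)       # false acceptance rate
      frr = np.mean(genuine < threshold)         # false rejection rate
      print(f"FAR = {far:.4f}, FRR = {frr:.4f}")

    Sweeping the threshold over the score range yields the trade-off curve between the two error rates.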

  4. Grebenkin I.V., Alekseenko A.E., Gaivoronskiy N.A., Ignatov M.G., Kazennov A.M., Kozakov D.V., Kulagin A.P., Kholodov Y.A.
    Ensemble building and statistical mechanics methods for MHC-peptide binding prediction
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1383-1395

    The proteins of the Major Histocompatibility Complex (MHC) play a key role in the functioning of the adaptive immune system, and the identification of peptides that bind to them is an important step in the development of vaccines and in understanding the mechanisms of autoimmune diseases. Today, there are a number of methods for predicting the binding of a particular MHC allele to a peptide. One of the best such methods is NetMHCpan-4.0, which is based on an ensemble of artificial neural networks. This paper presents a methodology for qualitatively improving the neural network underlying NetMHCpan-4.0. The proposed method uses the ensemble construction technique and adds as an input an estimate from the Potts model of statistical mechanics, which is a generalization of the Ising model. In the general case, the model describes the interaction of spins in a crystal lattice; within the framework of the proposed method, it is used to better represent the physical nature of the interaction of the proteins included in the complex. To assess the interaction of the MHC + peptide complex, we use a two-dimensional Potts model with 20 states (corresponding to the standard amino acids). Solving the inverse problem using data on experimentally confirmed interacting pairs, we obtain the values of the parameters of the Potts model, which we then use to evaluate a new MHC + peptide pair, adding this value to the input data of the neural network. This approach, combined with the ensemble construction technique, improves prediction accuracy in terms of the positive predictive value (PPV) metric compared to the baseline model.
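    For reference, the Potts model energy used in such inverse-problem approaches is typically written as (a standard form, stated here as an assumption rather than quoted from the paper) $E(\sigma_1, \dots, \sigma_L) = -\sum_{i<j} J_{ij}(\sigma_i, \sigma_j) - \sum_i h_i(\sigma_i)$, where each $\sigma_i$ takes one of $q = 20$ states (amino acids), $h_i$ are single-site fields, and $J_{ij}$ are pairwise couplings inferred from the data; for $q = 2$ the model reduces to the Ising model.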

  5. Bulatov A.A., Syssoev A.A., Iudin D.I.
    Simulation of lightning initiation on the basis of dynamical graph
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 125-147

    Despite numerous achievements of modern science, the problem of lightning initiation in an electrodeless thundercloud, where the maximum electric field strength is approximately an order of magnitude lower than the dielectric strength of air, remains unsolved. Although there is no doubt that discharge activity begins with the appearance of positive streamers, which can develop in approximately half the threshold electric field required for negative ones, it remains unexplored how cold, weakly conducting streamer systems unite into a joint hot, well-conducting leader channel capable of self-propagation due to effective polarization in a relatively small external field. In this study, we present a self-organizing transport model applied to the formation of an electric discharge tree in a thundercloud; the model is thus aimed at numerical simulation of the initial stage of lightning discharge development. Among the innovative features of the model are the absence of grid spacing, high spatiotemporal resolution, and consideration of the temporal evolution of the electrical parameters of transport channels. The model takes into account the widely known asymmetry between the threshold fields needed for the development of positive and negative streamers. In our model, the resulting well-conducting leader channel forms due to the collective effect of combining the currents of tens of thousands of interacting streamer channels, each of which initially has negligible conductivity and a temperature that does not differ from the ambient one. The model bipolar tree is a directed graph (it has both positive and negative parts). It has morphological and electrodynamic characteristics intermediate between a laboratory long spark and a developed lightning discharge. The model has a universal character which allows it to be used in other tasks related to the study of transport networks (in the broad sense of the word).

  6. This article solves the problem of developing a technology for collecting initial data for building models that assess the functional state of a person. This state is assessed by a person's pupil response to a change in illumination, based on the pupillometry method. The method involves the collection and analysis of initial data (pupillograms), presented in the form of time series characterizing the dynamics of the human pupil's response to a light impulse. The drawbacks of the traditional approach to collecting initial data using computer vision methods and time-series smoothing are analyzed. Attention is focused on the importance of the quality of the initial data for constructing adequate mathematical models, and the need for manual marking of the iris and pupil circles to improve the accuracy and quality of the initial data is substantiated. The stages of the proposed technology for collecting initial data are described. An example of the obtained pupillogram is given; it has a smooth shape and contains no outliers, noise, anomalies or missing values. Based on the presented technology, a hardware-software complex has been developed, consisting of special software with two main modules and hardware implemented on the basis of a Raspberry Pi 4 Model B microcomputer with peripheral equipment that implements the specified functionality. To evaluate the effectiveness of the developed technology, models of a single-layer perceptron and an ensemble of neural networks were used, built on initial data describing the functional state of intoxication of a person. The studies have shown that the use of manual marking of the initial data (in comparison with automatic computer vision methods) leads to a decrease in the number of type I and type II errors and, accordingly, to an increase in the accuracy of assessing the functional state of a person. Thus, the presented technology for collecting initial data can be effectively used to build adequate models for assessing the functional state of a person by pupillary response to changes in illumination. The use of such models is relevant in solving particular problems of ensuring transport security, in particular, monitoring the functional state of drivers.

  7. Umavovskiy A.V.
    Data-driven simulation of a two-phase flow in heterogeneous porous media
    Computer Research and Modeling, 2021, v. 13, no. 4, pp. 779-792

    The numerical methods used to simulate the evolution of hydrodynamic systems require considerable computational resources, thus limiting the number of simulations that can be performed. The data-driven simulation technique is one promising approach to the development of heuristic models, which may speed up the study of such systems. In this approach, machine learning methods are used to tune the weights of an artificial neural network that predicts the state of a physical system at a given point in time based on the initial conditions. This article describes an original neural network architecture and a novel multi-stage training procedure which together yield a heuristic model of a two-phase flow in a heterogeneous porous medium. The neural network-based model predicts the states of the grid cells at an arbitrary timestep (within known constraints), taking only the initial conditions as input: the heterogeneous permeability of the medium and the location of sources and sinks. The proposed model requires orders of magnitude less processor time in comparison with the classical numerical method, which served as the criterion for evaluating the effectiveness of the trained model. The proposed architecture includes a number of subnets trained in various combinations on several datasets; the techniques of adversarial training and weight transfer are also utilized.
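    A minimal sketch of the general idea of such a surrogate model (a generic illustration under assumed grid sizes and layer widths; the actual multi-subnet architecture and training procedure of the paper are not reproduced here):

      import torch
      import torch.nn as nn

      class SurrogateNet(nn.Module):
          """Maps initial conditions (permeability field, source/sink mask)
          plus a requested normalized timestep to the predicted saturation
          field on the same grid."""
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, kernel_size=3, padding=1),
              )

          def forward(self, permeability, sources, t):
              # Broadcast the timestep into a constant channel so a single
              # network can predict the state at an arbitrary time.
              t_channel = torch.full_like(permeability, float(t))
              x = torch.stack([permeability, sources, t_channel], dim=1)
              return self.net(x).squeeze(1)

      model = SurrogateNet()
      perm = torch.rand(4, 64, 64)     # hypothetical permeability fields
      src = torch.zeros(4, 64, 64)     # hypothetical source/sink masks
      pred = model(perm, src, t=0.5)   # predicted saturation at t = 0.5
      print(pred.shape)                # torch.Size([4, 64, 64])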

  8. Koltsov Y.V., Boboshko E.V.
    Comparative analysis of optimization methods for electrical energy losses interval evaluation problem
    Computer Research and Modeling, 2013, v. 5, no. 2, pp. 231-239

    This article is dedicated to a comparative analysis of optimization methods for the interval estimation of technical losses of electrical energy in 6–20 kV distribution networks. The interval evaluation problem is formulated as a multi-dimensional constrained minimization/maximization problem with an implicit objective function. A number of first- and zero-order numerical optimization methods are considered with the aim of determining the one most suitable for the problem of interest. The most suitable algorithm turns out to be BOBYQA, in which the objective function is replaced with its quadratic approximation in a trust region.

    Views (last year): 2. Citations: 1 (RSCI).
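    A minimal sketch of such an interval estimate obtained with a zero-order (derivative-free) method; here SciPy's Powell method is used as a stand-in, while BOBYQA itself is available, for example, in the nlopt and Py-BOBYQA packages, and the loss function below is purely hypothetical:

      import numpy as np
      from scipy.optimize import minimize

      # Hypothetical technical-losses model: losses as a function of
      # uncertain network parameters x, each known only within bounds.
      def losses(x):
          return 1.0 + 0.3 * x[0] ** 2 + 0.2 * x[0] * x[1] + 0.4 * x[1] ** 2

      bounds = [(-1.0, 1.0), (-1.0, 1.0)]
      x0 = np.array([0.5, 0.5])

      low = minimize(losses, x0, method="Powell", bounds=bounds)
      high = minimize(lambda x: -losses(x), x0, method="Powell", bounds=bounds)
      print(f"loss interval: [{low.fun:.3f}, {-high.fun:.3f}]")

    The lower and upper ends of the interval are obtained by minimizing and maximizing the same loss function over the admissible parameter box.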
  9. Tumanyan A.G., Bartsev S.I.
    Simple behavioral model of imprint formation
    Computer Research and Modeling, 2014, v. 6, no. 5, pp. 793-802

    The formation of adequate behavioral patterns in an unknown environment is carried out through exploratory behavior. At the same time, the rapid formation of an acceptable pattern is preferable to a long elaboration of a perfect pattern through repeated replays of the learning situation. In extreme situations, the phenomenon of imprinting is observed: the instant imprinting of a behavior pattern that ensures the survival of the individual. In this paper we propose a hypothesis and an imprinting model in which the neural network of a virtual robot, trained on a single successful pattern, demonstrates effective functioning. The realism of the model is assessed by checking the stability of the reproduced behavior pattern to perturbations of the situation in which the imprint was formed.

    Views (last year): 5. Citations: 2 (RSCI).
  10. Kiselev M.V.
    Exploration of 2-neuron memory units in spiking neural networks
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 401-416

    Working memory mechanisms in spiking neural networks consisting of leaky integrate-and-fire neurons with adaptive thresholds and synaptic plasticity are studied in this work. Networks of moderate size, including thousands of neurons, were explored. Working memory is the ability of a network to keep in its state information about recent stimuli presented to the network, such that this information is sufficient to determine which stimulus has been presented. In this study, the network state is defined as the current characteristics of network activity only, without the internal state of its neurons. In order to discover the neuronal structures serving as a possible substrate of the memory mechanism, optimization of the network parameters and structure using a genetic algorithm was carried out. Two kinds of neuronal structures with the desired properties were found: neuron pairs mutually connected by strong synaptic links and long tree-like neuronal ensembles. It was shown that only the neuron pairs are suitable for an efficient and reliable implementation of working memory. The properties of such memory units and of the structures formed by them are explored in the present study. It is shown that the characteristics of the studied two-neuron memory units can be set easily by an appropriate choice of the parameters of their neurons and synaptic connections. In addition, this work demonstrates that ensembles of these structures can provide the network with the capability of unsupervised learning to recognize patterns in the input signal.
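    A minimal sketch of the kind of two-neuron memory unit described above, assuming simple discrete-time leaky integrate-and-fire dynamics with adaptive thresholds (illustrative parameters only, not those found by the genetic algorithm in the paper): two neurons connected by strong mutual excitatory synapses keep firing after a brief input pulse, thereby holding information about the stimulus in the ongoing activity.

      import numpy as np

      def simulate(steps=200, w_mutual=1.6, leak=0.9, v_th0=1.0,
                   th_adapt=0.02, th_decay=0.9):
          v = np.zeros(2)          # membrane potentials
          adapt = np.zeros(2)      # adaptive components of the thresholds
          spikes_prev = np.zeros(2)
          counts = np.zeros(2)
          for t in range(steps):
              ext = np.array([2.0, 0.0]) if t < 5 else np.zeros(2)  # brief stimulus to neuron 0
              # each neuron integrates its leaky potential, the other neuron's
              # previous spike (mutual connection), and the external input
              v = leak * v + w_mutual * spikes_prev[::-1] + ext
              th = v_th0 + adapt
              spikes = (v >= th).astype(float)
              v = np.where(spikes > 0, 0.0, v)              # reset fired neurons
              adapt = th_decay * adapt + th_adapt * spikes  # threshold adaptation
              spikes_prev = spikes
              counts += spikes
          return counts

      # Both neurons keep spiking long after the 5-step input pulse ends,
      # so the recent stimulus can be read out from the ongoing activity.
      print(simulate())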
