Search results for 'network':
Articles found: 119
  1. Bratsun D.A., Buzmakov M.D.
    Repressilator with time-delayed gene expression. Part II. Stochastic description
    Computer Research and Modeling, 2021, v. 13, no. 3, pp. 587-609

    The repressilator is the first genetic regulatory network in synthetic biology, artificially constructed in 2000. It is a closed network of three genetic elements, $lacI$, $\lambda cI$ and $tetR$, which have a natural origin but are not found in nature in such a combination. The promoter of each of the three genes controls the next cistron via negative feedback, suppressing the expression of the neighboring gene. In our previous paper [Bratsun et al., 2018], we proposed a mathematical model of a delayed repressilator and studied its properties within the framework of a deterministic description. We assume that the delay can be either natural, i.e. arising during gene transcription/translation due to the multistage nature of these processes, or artificial, i.e. deliberately introduced into the operation of the regulatory network using genetic engineering technologies. In this work, we apply a stochastic description of the dynamic processes in the delayed repressilator, which is an important complement to the deterministic analysis because of the small number of molecules involved in gene regulation. The stochastic study is carried out numerically using the Gillespie algorithm modified for time-delay systems. We present a description of the algorithm, its software implementation, and the results of benchmark simulations for a one-gene delayed autorepressor. When studying the behavior of the repressilator, we show that the stochastic description in a number of cases gives new information about the behavior of the system which does not reduce to the deterministic dynamics even when averaged over a large number of realizations. We show that in the subcritical range of parameters, where the deterministic analysis predicts absolute stability of the system, quasi-regular oscillations may be excited due to the nonlinear interaction of noise and delay. Earlier, within the framework of the deterministic description, we discovered that there exists a long-lived transient regime, which is represented in phase space by a slow manifold. This regime reflects the process of long-term synchronization of protein pulsations among the repressilator genes. In this work, we show that the transition to the cooperative mode of gene operation occurs about two orders of magnitude faster when the effect of intrinsic noise is taken into account. We have obtained the probability distribution of the moment when the phase trajectory leaves the slow manifold and have determined the most probable time for such a transition. The influence of the intrinsic noise of chemical reactions on the dynamic properties of the repressilator is discussed.
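
    A minimal sketch of a Gillespie-type simulation with a fixed transcription/translation delay for a one-gene autorepressor (the rates, Hill-type repression form, and delay value below are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

k_prod, k_deg = 50.0, 1.0    # production and degradation rates (assumed)
K, n = 20.0, 2.0             # Hill repression constant and exponent (assumed)
tau = 10.0                   # transcription/translation delay (assumed)

t, t_end = 0.0, 200.0
protein = 0
pending = []                 # completion times of delayed production events

while t < t_end:
    a_prod = k_prod / (1.0 + (protein / K) ** n)   # repressed production propensity
    a_deg = k_deg * protein                        # degradation propensity
    a_total = a_prod + a_deg
    dt = rng.exponential(1.0 / a_total)
    if pending and pending[0] <= t + dt:
        # a previously initiated production event completes first
        t = pending.pop(0)
        protein += 1
        continue
    t += dt
    if rng.random() < a_prod / a_total:
        pending.append(t + tau)    # production will finish after the delay
    else:
        protein -= 1

print("final protein copy number:", protein)
```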

  2. Adamovskiy Y.R., Chertkov V.M., Bohush R.P.
    Model for building of the radio environment map for cognitive communication system based on LTE
    Computer Research and Modeling, 2022, v. 14, no. 1, pp. 127-146

    The paper is devoted to the secondary use of spectrum in telecommunication networks. It is emphasized that one solution to this problem is the use of cognitive radio technologies and dynamic spectrum access, whose successful operation requires a large amount of information, including the parameters of base stations and network subscribers. This information should be stored and processed using a radio environment map, a spatio-temporal database of all activity in the network that makes it possible to determine the frequencies available for use at a given time. The paper presents a two-level model for forming the radio environment map of an LTE cellular communication system, with distinct local and global levels, described by the following parameters: a set of frequencies, signal attenuation, a signal propagation map, the grid step, and the current time count. The key objects of the model are the base station and the subscriber device. The main parameters of a base station include: name, identifier, cell coordinates, frequency range number, radiated power, numbers of connected subscriber devices, and allocated resource blocks. For subscriber devices, the following parameters are used: name, identifier, location, current coordinates of the device cell, base station identifier, frequency range, numbers of the resource blocks used for communication with the station, radiated power, data transmission status, list of numbers of the nearest stations, and schedules of device movement and communication sessions. An algorithm implementing the model is presented, taking into account the scenarios of movement and communication sessions of subscriber devices. A method is given for calculating the radio environment map value at a point of the coordinate grid, taking into account losses during the propagation of radio signals from emitting devices. The software implementation of the model is performed using the MatLab package. Approaches that increase its execution speed are described. In the simulation, the parameters were chosen taking into account data from existing communication systems and the need to economize computing resources. Experimental results of the algorithm for forming the radio environment map are demonstrated, confirming the correctness of the developed model.
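
    As a minimal sketch of the map-calculation step described above (using a generic log-distance path-loss model with made-up powers and coordinates, since the paper's exact formulas are not reproduced here), the aggregate received power at one grid point could be estimated as follows:

```python
import numpy as np

def received_power_dbm(p_tx_dbm, d_m, f_mhz, n_exp=3.0):
    # Log-distance path loss: free-space loss at 1 m plus 10*n_exp*log10(d).
    fspl_1m = 20.0 * np.log10(f_mhz) - 27.55
    return p_tx_dbm - (fspl_1m + 10.0 * n_exp * np.log10(max(d_m, 1.0)))

def rem_value_dbm(point, emitters, f_mhz=1800.0):
    # Aggregate the power from all emitters at one grid point (sum in mW).
    total_mw = 0.0
    for x, y, p_tx_dbm in emitters:
        d = np.hypot(point[0] - x, point[1] - y)
        total_mw += 10.0 ** (received_power_dbm(p_tx_dbm, d, f_mhz) / 10.0)
    return 10.0 * np.log10(total_mw) if total_mw > 0.0 else -np.inf

# one grid point, two base stations transmitting 43 dBm each (made-up layout)
print(rem_value_dbm((100.0, 50.0), [(0.0, 0.0, 43.0), (300.0, 0.0, 43.0)]))
```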

  3. Kotliarova E.V., Krivosheev K.Yu., Gasnikova E.V., Sharovatova Y.I., Shurupov A.V.
    Proof of the connection between the Beckmann model with degenerate cost functions and the model of stable dynamics
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 335-342

    Since the 1950s, the field of city transport modelling has progressed rapidly, and the first equilibrium models of traffic flow distribution appeared. The most popular model, which is still widely used, was the Beckmann model, based on the two Wardrop principles. The core of the model can be briefly described as the search for a Nash equilibrium in a population demand game in which the losses of agents (drivers) are determined by the chosen path and its cost, with the correspondences (origin-destination demands) being fixed. The cost of a path is calculated as the sum of the costs of the path segments (graph edges) included in it. The cost of an edge (edge travel time) is determined by the amount of traffic on this edge (more traffic means a larger travel time). The flow on a graph edge is the sum of the flows over all paths passing through that edge. Thus, the cost of traveling along a path is determined not only by the choice of the path, but also by the paths other drivers have chosen, so this is a standard game-theoretic problem. The way the cost functions are constructed allows us to reduce the search for equilibrium to solving an optimization problem (the game is potential in this case). If the cost functions are monotone and non-decreasing, the optimization problem is convex. In fact, different assumptions about the cost functions produce different models. The most popular model is based on the BPR cost function; such functions are widely used in computations for real cities. However, at the beginning of the XXI century, Yu. E. Nesterov and A. de Palma showed that Beckmann-type models have serious weak points, which could be fixed using what the authors called the stable dynamics model. The search for equilibrium here can also be reduced to an optimization problem, in fact a linear programming problem. In 2013, A. V. Gasnikov discovered that the stable dynamics model can be obtained by a passage to the limit in the Beckmann model. However, this was done only for several practically important, but still special, cases; in general, the question of whether this passage to the limit is possible remained open. In this paper, we justify the possibility of the above-mentioned passage to the limit in the general case, when the cost of traveling along an edge, as a function of the flow along that edge, degenerates into a function equal to the fixed cost until the capacity is reached and to plus infinity once the capacity is exceeded.
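
    For illustration, the degenerate edge cost function described above and the resulting linear program of the stable dynamics model can be written as follows (the notation is ours, not the paper's: $\bar t_e$ is the free-flow travel time of edge $e$, $\bar f_e$ is its capacity, and $F$ is the set of edge flows consistent with the fixed correspondences):

$$
\tau_e(f_e)=\begin{cases}\bar t_e, & 0 \le f_e < \bar f_e,\\ [\bar t_e, +\infty), & f_e = \bar f_e,\\ +\infty, & f_e > \bar f_e,\end{cases}
\qquad
\min_{f \in F}\ \sum_{e}\bar t_e f_e \quad \text{subject to} \quad f_e \le \bar f_e \ \text{ for all } e.
$$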

  4. Lyubushin A.A., Rodionov E.A.
    Analysis of predictive properties of ground tremor using Huang decomposition
    Computer Research and Modeling, 2024, v. 16, no. 4, pp. 939-958

    A method is proposed for analyzing the tremor of the earth's surface, measured by means of space geodesy, in order to identify prognostic effects of the activation of seismicity. The method is illustrated by a joint analysis of a set of synchronous time series of daily vertical displacements of the earth's surface on the Japanese Islands for the time interval 2009–2023. The analysis is based on dividing the source data (1047 time series) into blocks (clusters of stations) and sequentially applying the principal component method. The station network is divided into clusters using the K-means method, with the number of clusters chosen by the criterion of maximum pseudo-F statistics; for Japan the optimal number of clusters was 15. The Huang method of decomposition into a sequence of independent empirical oscillation modes (EMD, Empirical Mode Decomposition) is applied to the time series of principal components from the station blocks. To ensure the stability of the estimated EMD waveforms, averaging over 1000 independent realizations of additive white noise of limited amplitude was performed. Using the Cholesky decomposition of the covariance matrix of the waveforms of the first three EMD components in a sliding time window, indicators of anomalous tremor behavior were determined. By calculating the correlation function between the averaged indicators of anomalous behavior and the released seismic energy in the vicinity of the Japanese Islands, it was established that bursts of the measure of anomalous tremor behavior precede releases of seismic energy. The purpose of the article is to examine the widespread hypothesis that movements of the earth's crust recorded by space geodesy may contain predictive information. That displacements recorded by geodetic methods respond to the effects of earthquakes is widely known and has been demonstrated many times, but isolating geodetic effects that predict seismic events is much more challenging. In our paper, we propose one method for detecting predictive effects in space geodesy data.
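
    A minimal sketch of the station-clustering step described above, with synthetic station coordinates instead of the real GPS network; the pseudo-F statistic is computed here as the Calinski-Harabasz score:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 10.0, size=(200, 2))   # stand-in station coordinates

best_k, best_score = None, -np.inf
for k in range(2, 21):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(coords)
    score = calinski_harabasz_score(coords, labels)  # pseudo-F statistic
    if score > best_score:
        best_k, best_score = k, score

print("optimal number of clusters:", best_k)
```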

  5. Terekhin A.T., Budilova E.V., Karpenko M.P., Kachalova L.M., Chmyhova E.V.
    Lyapunov function as a tool for the study of cognitive and regulatory processes in organism
    Computer Research and Modeling, 2009, v. 1, no. 4, pp. 449-456

    Cognitive and regulatory processes in the organism are ensured by the functioning of several different network systems: neural, endocrine, immune, and genetic. These systems are, however, closely related and form a single integrated neurogenohumoral cognitive-regulatory dynamic system of the organism. A review of publications is given which shows that a corresponding Lyapunov function (energy function, potential function) can be associated with this dynamic system, and that analysis of this function, owing to its geometric clarity, makes it easy to reveal a number of general properties of the cognitive and regulatory functioning of the organism.
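
    As a schematic illustration (our notation, not the authors'), for a gradient dynamical system the associated potential function $V$ plays the role of a Lyapunov function, since it does not increase along trajectories:

$$
\dot{x} = -\nabla V(x), \qquad \frac{dV}{dt} = \nabla V(x)\cdot\dot{x} = -\|\nabla V(x)\|^2 \le 0,
$$

    so trajectories flow toward local minima of $V$, which correspond to stable states of the cognitive-regulatory network.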

  6. Suvorov N.V., Shleymovich M.P.
    Mathematical model of the biometric iris recognition system
    Computer Research and Modeling, 2020, v. 12, no. 3, pp. 629-639

    Automatic recognition of personal identity by biometric features is based on unique peculiarities or characteristics of people. The biometric identification process consists in creating reference templates and comparing them with new input data. In practice, iris pattern recognition algorithms demonstrate high accuracy and a low percentage of identification errors. The advantages of the iris pattern over other biometric features are determined by its high number of degrees of freedom (about 249), high density of unique features, and constancy over time. A high level of recognition reliability is very important because it enables search in large databases: unlike the one-to-one verification mode, which is applicable only when the number of comparisons is small, it allows operation in the one-to-many identification mode. Every biometric identification system is probabilistic, and its quality is described by such parameters as recognition accuracy, false acceptance rate, and false rejection rate. These characteristics allow one to compare identity recognition methods and to assess system performance under various conditions. This article presents a mathematical model of biometric identification by iris pattern and its characteristics. In addition, the results of comparing the model with the real recognition process are analyzed. For this analysis, a review of existing iris recognition methods based on different unique feature vectors was carried out. A Python-based software package is described that builds the probability distributions and generates large test data sets. Such data sets can also be used to train the neural network that makes the identification decision. Furthermore, an algorithm combining several iris identification methods is proposed to improve the quality characteristics of the system in comparison with the use of each method separately.
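
    As a minimal illustration of the last two characteristics (the match-score distributions below are synthetic stand-ins, not the article's data), the false acceptance and false rejection rates at a given decision threshold can be estimated as follows:

```python
import numpy as np

rng = np.random.default_rng(2)
genuine = rng.normal(0.25, 0.05, 10_000)    # Hamming-distance-like scores, same iris
impostor = rng.normal(0.47, 0.03, 10_000)   # scores for different irises

def far_frr(threshold):
    far = np.mean(impostor <= threshold)    # impostors wrongly accepted
    frr = np.mean(genuine > threshold)      # genuine users wrongly rejected
    return far, frr

for thr in (0.30, 0.35, 0.40):
    print(thr, far_frr(thr))
```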

  7. Grebenkin I.V., Alekseenko A.E., Gaivoronskiy N.A., Ignatov M.G., Kazennov A.M., Kozakov D.V., Kulagin A.P., Kholodov Y.A.
    Ensemble building and statistical mechanics methods for MHC-peptide binding prediction
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1383-1395

    The proteins of the Major Histocompatibility Complex (MHC) play a key role in the functioning of the adaptive immune system, and the identification of peptides that bind to them is an important step in the development of vaccines and in understanding the mechanisms of autoimmune diseases. Today, there are a number of methods for predicting the binding of a particular MHC allele to a peptide. One of the best such methods is NetMHCpan-4.0, which is based on an ensemble of artificial neural networks. This paper presents a methodology for qualitatively improving the neural network underlying NetMHCpan-4.0. The proposed method uses the ensemble construction technique and adds, as an extra input, an estimate from the Potts model of statistical mechanics, which is a generalization of the Ising model. In the general case, the model describes the interaction of spins in a crystal lattice. Within the framework of the proposed method, the model is used to better represent the physical nature of the interaction of the proteins included in the complex. To assess the interaction of the MHC + peptide complex, we use a two-dimensional Potts model with 20 states (corresponding to the 20 standard amino acids). Solving the inverse problem using data on experimentally confirmed interacting pairs, we obtain the values of the parameters of the Potts model, which we then use to score a new MHC + peptide pair, and we add this score to the input data of the neural network. This approach, combined with the ensemble construction technique, allows for improved prediction accuracy, in terms of the positive predictive value (PPV) metric, compared to the baseline model.
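
    A minimal sketch of how such a Potts score can be computed for a peptide once the fields and couplings are known (the parameter values, peptide length, and sequence below are illustrative placeholders, not the fitted model from the paper):

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"                 # 20 standard amino acids
L = 9                                       # peptide length (assumed)
rng = np.random.default_rng(3)
h = rng.normal(size=(L, 20))                # single-site fields (placeholder values)
J = rng.normal(size=(L, L, 20, 20))         # pairwise couplings (placeholder values)

def potts_energy(peptide):
    s = [AA.index(a) for a in peptide]
    e = sum(h[i, s[i]] for i in range(L))
    e += sum(J[i, j, s[i], s[j]] for i in range(L) for j in range(i + 1, L))
    return e                                # used as an extra neural-network input

print(potts_energy("SIINFEKLA"))
```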

  8. Bulatov A.A., Syssoev A.A., Iudin D.I.
    Simulation of lightning initiation on the basis of dynamical graph
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 125-147

    Despite the numerous achievements of modern science, the problem of lightning initiation in an electrodeless thundercloud, where the maximum electric field strength is approximately an order of magnitude lower than the dielectric strength of air, remains unsolved. Although there is no doubt that discharge activity begins with the appearance of positive streamers, which can develop in a field approximately half the threshold required for negative ones, it remains unexplored how cold, weakly conducting streamer systems unite into a joint hot, well-conducting leader channel capable of self-propagation due to effective polarization in a relatively small external field. In this study, we present a self-organizing transport model applied to the formation of an electric discharge tree in a thundercloud; the model is thus aimed at numerical simulation of the initial stage of lightning discharge development. Among its innovative features are the absence of a fixed grid spacing, high spatio-temporal resolution, and consideration of the temporal evolution of the electrical parameters of the transport channels. The model takes into account the well-known asymmetry between the threshold fields needed for the development of positive and negative streamers. In our model, the resulting well-conducting leader channel forms due to the collective effect of combining the currents of tens of thousands of interacting streamer channels, each of which initially has negligible conductivity and a temperature that does not differ from the ambient one. The model bipolar tree is a directed graph (it has both positive and negative parts). Its morphological and electrodynamic characteristics are intermediate between a laboratory long spark and developed lightning. The model is universal in character, which makes it possible to use it in other problems related to the study of transport networks (in the broad sense of the word).
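
    A schematic sketch of one growth step of such a bipolar tree on a directed graph, reflecting only the polarity asymmetry of the streamer thresholds mentioned above (the numbers, growth rule, and field function are invented for illustration and are not the paper's model):

```python
import numpy as np

E_POS, E_NEG = 0.5e6, 1.0e6   # V/m, assumed positive/negative propagation thresholds

def try_grow(tips, local_field, edges, step=3.0):
    # Add a new streamer edge from each channel tip where the local field
    # exceeds the polarity-dependent threshold (positive streamers need
    # roughly half the field of negative ones).
    for x, y, z, polarity in tips:
        threshold = E_POS if polarity > 0 else E_NEG
        field = local_field(x, y, z)
        if abs(field) >= threshold:
            dz = step * polarity * np.sign(field)   # schematic growth direction
            edges.append(((x, y, z), (x, y, z + dz)))
    return edges

edges = try_grow([(0.0, 0.0, 0.0, +1), (0.0, 0.0, -10.0, -1)],
                 lambda x, y, z: 0.6e6, [])
print(edges)   # only the positive tip grows in this sub-threshold field
```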

  9. This article solves the problem of developing a technology for collecting initial data for building models that assess the functional state of a person. This state is assessed from the person's pupillary response to a change in illumination using the pupillometry method. The method involves the collection and analysis of initial data (pupillograms) presented in the form of time series characterizing the dynamics of the human pupil's response to a light impulse. The drawbacks of the traditional approach to collecting initial data using computer vision methods and time-series smoothing are analyzed. Attention is focused on the importance of the quality of the initial data for constructing adequate mathematical models. The need for manual marking of the iris and pupil circles is substantiated as a way to improve the accuracy and quality of the initial data. The stages of the proposed technology for collecting initial data are described. An example of an obtained pupillogram is given, which has a smooth shape and contains no outliers, noise, anomalies or missing values. Based on the presented technology, a hardware-software complex has been developed, consisting of special software with two main modules and of hardware based on a Raspberry Pi 4 Model B microcomputer with peripheral equipment implementing the specified functionality. To evaluate the effectiveness of the developed technology, models of a single-layer perceptron and of a collective (ensemble) of neural networks are used, built on initial data on the functional state of intoxication of a person. The studies have shown that the use of manual marking of the initial data (in comparison with automatic computer vision methods) leads to a decrease in the number of errors of the first and second kind and, accordingly, to an increase in the accuracy of assessing the functional state of a person. Thus, the presented technology for collecting initial data can be effectively used to build adequate models for assessing the functional state of a person by pupillary response to changes in illumination. The use of such models is relevant for individual problems of ensuring transport security, in particular, monitoring the functional state of drivers.
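
    A minimal sketch of the evaluation step described above: training a single-layer perceptron on pupillogram-derived features and estimating errors of the first and second kind (the features and labels are synthetic stand-ins, not the paper's marked data):

```python
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 6))                   # e.g. latency, amplitude, constriction speed, ...
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # synthetic "intoxicated / normal" labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = Perceptron(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)

type1 = np.mean(pred[y_te == 0] == 1)           # error of the first kind (false alarm)
type2 = np.mean(pred[y_te == 1] == 0)           # error of the second kind (miss)
print("type I:", type1, "type II:", type2)
```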

  10. Umavovskiy A.V.
    Data-driven simulation of a two-phase flow in heterogeneous porous media
    Computer Research and Modeling, 2021, v. 13, no. 4, pp. 779-792

    The numerical methods used to simulate the evolution of hydrodynamic systems require considerable computational resources, thus limiting the number of possible simulations. The data-driven simulation technique is one promising approach to the development of heuristic models that may speed up the study of such systems. In this approach, machine learning methods are used to tune the weights of an artificial neural network that predicts the state of a physical system at a given point in time based on the initial conditions. This article describes an original neural network architecture and a novel multi-stage training procedure that create a heuristic model of a two-phase flow in a heterogeneous porous medium. The neural-network-based model predicts the states of the grid cells at an arbitrary timestep (within the known constraints), taking in only the initial conditions: the properties of the heterogeneous permeability of the medium and the locations of sources and sinks. The proposed model requires orders of magnitude less processor time than the classical numerical method, which served as the reference for evaluating the effectiveness of the trained model. The proposed architecture includes a number of subnets trained in various combinations on several datasets. The techniques of adversarial training and weight transfer are utilized.
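
    A minimal sketch of the kind of mapping such a surrogate performs, from initial conditions and a query time to a predicted field on the grid (the tiny convolutional architecture below is a toy stand-in, not the paper's network):

```python
import torch
import torch.nn as nn

class FlowSurrogate(nn.Module):
    # Maps (permeability field, source/sink mask, query time) to a predicted field.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, permeability, sources, t):
        t_map = torch.full_like(permeability, float(t))   # broadcast query time as a channel
        x = torch.stack([permeability, sources, t_map], dim=1)
        return self.net(x).squeeze(1)                     # predicted value per grid cell

model = FlowSurrogate()
perm = torch.rand(2, 64, 64)      # batch of 2 synthetic permeability maps
src = torch.zeros(2, 64, 64)      # source/sink mask (empty here)
print(model(perm, src, t=0.5).shape)   # torch.Size([2, 64, 64])
```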
