All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
- Multifractal and entropy statistics of seismic noise in Kamchatka in connection with the strongest earthquakes
Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1507-1521
The study of the properties of seismic noise in Kamchatka is based on the idea that noise is an important source of information about the processes preceding strong earthquakes. We consider the hypothesis that an increase in seismic hazard is accompanied by a simplification of the statistical structure of seismic noise and an increase in the spatial correlations of its properties. The entropy of the distribution of squared wavelet coefficients, the support width of the multifractal singularity spectrum, and the Donoho-Johnstone index are used as statistics characterizing the noise. The values of these parameters reflect complexity: if a random signal is close in its properties to white noise, the entropy is at its maximum and the other two parameters are at their minimum. The statistics are calculated for six station clusters. For each cluster, daily median noise properties are computed in successive one-day time windows, yielding an 18-dimensional time series of properties (3 statistics for each of the 6 clusters). To extract the common features of the changes in noise parameters, the principal component method is applied to each station cluster, compressing the information into a 6-dimensional daily time series of principal components. Spatial noise coherence is estimated as the set of maxima of the pairwise squared coherence spectra between the principal components of the station clusters in a sliding time window of 365 days. By computing histograms of the cluster numbers at which the minimum and maximum values of the noise statistics are attained within a sliding 365-day window, the migration of seismic hazard areas is assessed in comparison with strong earthquakes of magnitude at least 7.
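The first of the three statistics can be illustrated with a short sketch. The following Python fragment (our illustration, not the authors' code; the wavelet family and decomposition level are assumptions) computes the normalized entropy of the distribution of squared wavelet coefficients, which is maximal for white-noise-like records:

```python
# A minimal sketch of the wavelet-based entropy statistic described above:
# the entropy of the distribution of squared wavelet coefficients,
# normalized so that white noise gives values near 1.
import numpy as np
import pywt  # PyWavelets

def wavelet_entropy(signal, wavelet="db6", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    c = np.concatenate(coeffs)           # all approximation/detail coefficients
    p = c**2 / np.sum(c**2)              # distribution of squared coefficients
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(p.size)  # normalized entropy

# White-noise-like records give entropy close to the maximum (1.0);
# a drop in this value signals a simplification of the noise structure.
rng = np.random.default_rng(0)
print(wavelet_entropy(rng.standard_normal(86400)))   # close to 1 for white noise
```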
- Traffic flow speed prediction on transportation graph with convolutional neural networks
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 359-367
Short-term prediction of road traffic conditions is one of the main tasks of transportation modelling. Its main purposes are traffic control, accident reporting, and avoiding traffic jams through knowledge of the traffic flow, with subsequent transportation planning. A number of solutions, both model-driven and data-driven, have proven successful in capturing the dynamics of traffic flow. Nevertheless, most space-time models suffer from high mathematical complexity and low efficiency. Artificial neural networks, one of the prominent data-driven approaches, show promising performance in modelling the complexity of traffic flow. We present a neural network architecture for traffic flow prediction on a real-world road network graph. The model is based on the combination of a recurrent neural network and a graph convolutional neural network: the recurrent network models temporal dependencies, while the convolutional network is responsible for extracting spatial features from the traffic. To make predictions several steps ahead, an encoder-decoder architecture is used, which reduces the propagation of noise due to inexact predictions. To model the complexity of traffic flow, we employ a multilayered architecture. Since deeper neural networks are more difficult to train, we use skip-connections between layers to speed up training, so that each layer learns only the residual function with respect to the outputs of the previous layer. The resulting neural network was trained on raw data from traffic flow detectors of the US highway system with a resolution of 5 minutes. Three metrics were used to estimate prediction quality: mean absolute error, mean relative error, and mean squared error. For all metrics, the proposed model achieved a lower prediction error than previously published models, such as vector autoregression, LSTM, and Graph Convolution GRU.
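The combination of graph convolution with a recurrent cell can be sketched compactly. The fragment below is our schematic reconstruction of the idea, not the paper's implementation: spatial mixing is done by multiplying node features with a normalized adjacency matrix, and the GRU gates provide the temporal recurrence; the dimensions and the simple row normalization are assumptions.

```python
# A sketch of a graph-convolutional GRU cell: A_hat mixes features of
# neighbouring road segments, the gates implement the GRU recurrence.
import torch
import torch.nn as nn

class GraphConvGRUCell(nn.Module):
    def __init__(self, num_nodes, in_dim, hidden_dim, adj):
        super().__init__()
        a = adj + torch.eye(num_nodes)                  # add self-loops
        self.register_buffer("a_hat", a / a.sum(dim=1, keepdim=True))
        self.gates = nn.Linear(in_dim + hidden_dim, 2 * hidden_dim)
        self.cand = nn.Linear(in_dim + hidden_dim, hidden_dim)

    def forward(self, x, h):
        # x: (num_nodes, in_dim), h: (num_nodes, hidden_dim)
        xh = self.a_hat @ torch.cat([x, h], dim=1)      # graph convolution step
        z, r = self.gates(xh).chunk(2, dim=1)
        z, r = torch.sigmoid(z), torch.sigmoid(r)
        c = torch.tanh(self.cand(self.a_hat @ torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * c                      # GRU state update

adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # toy 3-node road graph
cell = GraphConvGRUCell(3, 1, 16, adj)
h = torch.zeros(3, 16)
for t in range(12):                                     # twelve 5-minute steps
    h = cell(torch.randn(3, 1), h)                      # stand-in detector readings
```

In the encoder-decoder setting described above, one such cell would encode the observed sequence into h, and a second cell would unroll the predictions.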
- Methods and problems in the kinetic approach for simulating biological structures
Computer Research and Modeling, 2018, v. 10, no. 6, pp. 851-866
A biological structure is considered as an open nonequilibrium system whose properties can be described on the basis of kinetic equations. New problems with nonequilibrium boundary conditions are introduced. The nonequilibrium distribution gradually tends toward an equilibrium state. The region of spatial inhomogeneity has a scale that depends on the rate of mass transfer in the open system and on the characteristic time of metabolism. In the proposed approximation, the internal energy of the motion of molecules is much less than the energy of translational motion; in other terms, the kinetic energy of the average blood velocity is substantially higher than the energy of chaotic motion of the same particles. We state that the relaxation problem models a living system. The flow of entropy into the system decreases downstream, which corresponds to Schrödinger's general idea that a living system "feeds on" negentropy. We introduce a quantity that determines the complexity of the biosystem: the difference between the nonequilibrium kinetic entropy and the equilibrium entropy at each spatial point, integrated over the entire spatial region. Solutions to the problems of spatial relaxation allow us to estimate the size of biosystems as regions of nonequilibrium. The results are compared with empirical data; in particular, for mammals we conclude that the larger the animal, the smaller the specific energy of metabolism. This feature is reproduced in our model, since the extent of the nonequilibrium region is larger in a system where the reaction rate is lower or, in terms of the kinetic approach, where the relaxation time of the interaction between the molecules is longer. The approach is also used to estimate a part of a living system, namely a green leaf. The problems of aging as degradation of an open nonequilibrium system are considered. The analogy relates to the structure: for a closed system, the equilibrium of the structure is attained for the same molecules, while in an open system a transition occurs to the equilibrium of different particles, which change due to metabolism. Two essentially different time scales are distinguished, whose ratio is approximately constant for various animal species. Under the assumption that these two time scales exist, the kinetic equation splits into two equations describing the metabolic (stationary) and degradative (nonstationary) parts of the process.
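The complexity measure admits a compact formulation. The following is a hedged reconstruction in standard kinetic notation; the abstract gives only a verbal definition, so the symbols, the sign convention, and the Boltzmann form of the kinetic entropy are our assumptions:

```latex
% Complexity as the integrated pointwise difference between the
% nonequilibrium kinetic entropy density s and its equilibrium value s_eq,
% over the spatial region \Omega of the biosystem (notation is ours).
\[
  C \;=\; \int_{\Omega} \bigl( s(\mathbf{r}) - s_{\mathrm{eq}}(\mathbf{r}) \bigr)\, d\mathbf{r},
  \qquad
  s(\mathbf{r}) \;=\; -k_B \int f(\mathbf{r},\mathbf{v}) \ln f(\mathbf{r},\mathbf{v})\, d\mathbf{v},
\]
```

Here $f(\mathbf{r},\mathbf{v})$ is the one-particle distribution function; $C$ vanishes when the distribution is everywhere Maxwellian, so a nonzero value marks the nonequilibrium region whose size the relaxation problems estimate.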
- Experimental identification of the organization of human mental calculations on the basis of algebras of different associativity
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 311-327
The work continues research on the ability of a person to improve the productivity of information processing by working in parallel or by improving the performance of the analyzers. A person receives a series of tasks whose solution requires the processing of a certain amount of information. The time and the validity of each decision are recorded. The dependence of the average solution time on the amount of information in the task is determined from the correctly solved tasks. In accordance with the proposed method, the tasks involve computing expressions in two algebras, one associative and the other nonassociative. To facilitate the work of the subjects, figurative graphic images of the algebra elements were used in the experiment. Nonassociative calculations were implemented in the form of the game "rock-paper-scissors": it was necessary to determine the winning symbol in a long line of these figures, assuming that they appear sequentially from left to right and each plays against the previous winning symbol. Associative calculations were based on the recognition of drawings from a finite set of simple images: it was necessary to determine which figure from this set was missing from the line, or to state that all the pictures were present; in each task, at most one picture was missing. Computation in an associative algebra allows parallel counting, whereas in the absence of associativity only sequential computation is possible. Therefore, analysis of the time for solving a series of tasks reveals whether the computing strategy was uniform sequential, accelerated sequential, or parallel. In the experiments it was found that all subjects used a uniform sequential strategy for the nonassociative tasks. For the associative task, all subjects used parallel computing, and some accelerated the parallel computation as the complexity of the task grew. A small portion of the subjects at high complexity, judging by the evolution of the solution time, supplemented the parallel counting with a sequential stage of calculations (possibly to check the solution). We developed a special method for assessing the rate at which a person processes input information; it allowed us to estimate the level of parallelism of the computation in the associative task. A parallelism level of two to three was registered. The characteristic speed of information processing in the sequential case (about one and a half symbols per second) is half the typical speed of human image recognition; apparently the difference reflects the processing time actually spent on the calculation itself. For the associative task with the minimum amount of information, the solution time is close to that of the nonassociative case, differing by less than a factor of two. This is probably because, for a small number of symbols, recognition nearly exhausts the calculations required by the nonassociative task used.
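The distinction that drives the experiment can be made concrete with a toy sketch (ours, not the paper's stimuli): the rock-paper-scissors operation is non-associative, so the winner of a long line can only be found by a left-to-right fold, while an associative operation admits a parallel, tree-shaped reduction.

```python
# Sequential fold for the non-associative game vs. tree reduction,
# which is valid (and parallelizable) only for associative operations.
from functools import reduce

BEATS = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}

def rps(a, b):
    # Non-associative: rps(rps("rock","paper"),"scissors") == "scissors",
    # but rps("rock", rps("paper","scissors")) == "rock".
    return a if (a, b) in BEATS or a == b else b

def tree_reduce(op, xs):
    # Pairwise combination in rounds; correct only for associative op.
    while len(xs) > 1:
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

line = ["rock", "paper", "paper", "scissors", "rock"]
print(reduce(rps, line))                  # left-to-right fold: the "winner"
print(tree_reduce(max, [3, 1, 4, 1, 5]))  # associative max: tree-parallelizable
```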
- Ensemble building and statistical mechanics methods for MHC-peptide binding prediction
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1383-1395
The proteins of the Major Histocompatibility Complex (MHC) play a key role in the functioning of the adaptive immune system, and the identification of peptides that bind to them is an important step in the development of vaccines and in understanding the mechanisms of autoimmune diseases. Today there are a number of methods for predicting the binding of a particular MHC allele to a peptide; one of the best is NetMHCpan-4.0, which is based on an ensemble of artificial neural networks. This paper presents a methodology for qualitatively improving the neural network underlying NetMHCpan-4.0. The proposed method uses the ensemble construction technique and adds, as an extra input, an estimate from the Potts model of statistical mechanics, which is a generalization of the Ising model. In the general case, the model describes the interaction of spins in a crystal lattice; within the framework of the proposed method, it is used to better represent the physical nature of the interaction of the proteins included in the complex. To assess the interaction of the MHC + peptide complex, we use a two-dimensional Potts model with 20 states, corresponding to the standard amino acids. Solving the inverse problem on data of experimentally confirmed interacting pairs, we obtain the values of the parameters of the Potts model, which we then use to evaluate a new MHC + peptide pair, supplying this value as an additional input to the neural network. This approach, combined with the ensemble construction technique, improves prediction accuracy in terms of the positive predictive value (PPV) metric compared to the baseline model.
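The scoring step can be sketched as follows. This is our illustration, not the authors' pipeline: the parameter arrays are random stand-ins for those fitted by the inverse problem, and the sequence and the size of the "groove slice" are toy assumptions.

```python
# Scoring an MHC-peptide pair with a 20-state Potts model: the energy is a
# sum of single-site fields h and pairwise couplings J over the sequence.
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"            # 20 standard amino acids = Potts states
IDX = {a: i for i, a in enumerate(AA)}

def potts_energy(seq, h, J):
    s = [IDX[a] for a in seq]
    e = sum(h[i, s[i]] for i in range(len(s)))
    e += sum(J[i, j, s[i], s[j]]
             for i in range(len(s)) for j in range(i + 1, len(s)))
    return e                            # lower energy ~ more favorable pair

rng = np.random.default_rng(1)
L = 9 + 5                               # peptide plus a toy slice of the MHC groove
h = rng.normal(size=(L, 20))            # stand-in for fitted fields
J = rng.normal(scale=0.1, size=(L, L, 20, 20))  # stand-in for fitted couplings
score = potts_energy("SIINFEKLLVVVAA", h, J)    # fed to the network as extra input
print(score)
```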
- Analysing the impact of migration on background social strain using a continuous social stratification model
Computer Research and Modeling, 2022, v. 14, no. 3, pp. 661-673
The background social strain of a society can be quantitatively estimated using various statistical indicators. Mathematical models that allow forecasting the dynamics of social strain have been successful in describing various social processes. If the number of interacting groups is small, the dynamics of the corresponding indicators can be modelled with a system of ordinary differential equations; as the number of interacting components grows, so does the complexity, which makes the analysis of such models challenging. A continuous social stratification model can be considered as the result of passing from a discrete number of interacting social groups to their continuous distribution over some finite interval. In such a model, social strain naturally spreads locally between neighbouring groups, while in reality the social elite influences the whole society via the news media, and the Internet allows non-local interaction between social groups. These factors can, however, be taken into account to some extent by the term of the model describing negative external influence on the society. In this paper, we develop a continuous social stratification model describing the dynamics of two societies connected through migration. We assume that people migrate from the social group of the donor society with the highest strain level to the poorer social layers of the acceptor society, transferring social strain at the same time. We assume that all model parameters are constants, which is a realistic assumption for small societies only. Using the finite volume method, we construct a spatial discretization of the problem capable of reproducing the finite propagation speed of social strain. We verify the discretization by comparing the results of numerical simulations with exact solutions of the auxiliary nonlinear diffusion equation. We perform a numerical analysis of the proposed model for different values of the model parameters, study the impact of migration intensity on the stability of the acceptor society, and find the destabilization conditions. The results obtained in this work can be used in further analysis of the model in the more realistic case of inhomogeneous coefficients.
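The key numerical property, finite propagation speed, can be illustrated with a schematic finite-volume discretization. This sketch is ours, under the assumption of a degenerate, porous-medium-type diffusivity D(u) = u; the actual model equation and coefficients are not reproduced here.

```python
# Finite-volume scheme for u_t = (D(u) u_x)_x with degenerate D(u) = u:
# with degenerate diffusion the strain front spreads with finite speed
# instead of smearing over the whole interval instantly.
import numpy as np

def step(u, dx, dt):
    D = 0.5 * (u[:-1] + u[1:])             # face-centered diffusivity D(u) = u
    flux = -D * (u[1:] - u[:-1]) / dx      # flux at interior cell faces
    du = np.zeros_like(u)
    du[1:-1] = -(flux[1:] - flux[:-1]) / dx
    return u + dt * du                     # explicit Euler; boundary cells held fixed

x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)  # localized initial strain
for _ in range(2000):
    u = step(u, dx, dt=0.2 * dx**2)            # dt respects the stability bound
print(u[np.abs(x - 0.5) > 0.4].max())          # still ~0: the front is finite
```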
- The use of cluster analysis methods for the study of a set of feasible solutions of the phase problem in biological crystallography
Computer Research and Modeling, 2010, v. 2, no. 1, pp. 91-101
An X-ray diffraction experiment allows one to determine the magnitudes of the complex coefficients in the decomposition of the studied electron density distribution into a Fourier series. Determining the phase values lost in the experiment constitutes the central problem of the method, the phase problem. Some methods for solving the phase problem result in a set of feasible solutions. Cluster analysis may be used to investigate the composition of this set and to extract one or several typical solutions. An essential feature of the approach is that the closeness of two solutions is estimated by the map correlation between two aligned Fourier syntheses calculated with the phase sets under comparison. An interactive computer program, ClanGR, was designed to perform this analysis.
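The closeness measure can be sketched in a few lines. This is our illustration, not the ClanGR code: the origin alignment mentioned in the abstract is omitted, and the syntheses are simplified (a real electron-density map would also require Hermitian symmetry of the Fourier coefficients).

```python
# Map correlation between two Fourier syntheses built from the same
# experimental magnitudes but two rival phase sets.
import numpy as np

def map_correlation(mags, phases_a, phases_b):
    rho_a = np.fft.ifftn(mags * np.exp(1j * phases_a)).real
    rho_b = np.fft.ifftn(mags * np.exp(1j * phases_b)).real
    a, b = rho_a - rho_a.mean(), rho_b - rho_b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

rng = np.random.default_rng(2)
mags = rng.random((16, 16, 16))                 # stand-in measured magnitudes
p1 = rng.uniform(0, 2 * np.pi, mags.shape)
print(map_correlation(mags, p1, p1))            # identical phase sets: 1.0
print(map_correlation(mags, p1,                 # unrelated phase sets: ~0
                      rng.uniform(0, 2 * np.pi, mags.shape)))
```

Pairwise correlations of this kind fill the distance matrix on which the clustering of feasible solutions operates.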
- High-throughput identification of hydride phase-change kinetics models
Computer Research and Modeling, 2020, v. 12, no. 1, pp. 171-183
Metal hydrides are an interesting class of chemical compounds that can reversibly bind a large amount of hydrogen and are therefore of interest for energy applications. Understanding the factors affecting the kinetics of hydride formation and decomposition is especially important. Features of the material, the experimental setup, and the conditions affect the mathematical description of the processes, which can undergo significant changes during the processing of experimental data. The article proposes a general approach to the numerical modeling of the formation and decomposition of metal hydrides and to solving the inverse problems of estimating material parameters from measurement data. The models are divided into two classes: diffusive models, which take into account the gradient of hydrogen concentration in the metal lattice, and models with fast diffusion. The former are more complex and take the form of non-classical boundary value problems of parabolic type; a rather general approach to the grid solution of such problems is described. The latter are solved relatively simply but can change greatly when the model assumptions change. Our experience in processing experimental data shows that a flexible software tool is needed: a tool that allows, on the one hand, building models from standard blocks and freely changing them when necessary, and, on the other hand, avoiding the implementation of routine algorithms. It should also be adapted to high-performance systems of different paradigms. These conditions are satisfied by the HIMICOS library presented in the paper, which has been tested on a large amount of experimental data. It allows simulating the kinetics of formation and decomposition of metal hydrides, as well as related tasks, at three levels of abstraction. At the low level, the user defines interface procedures, such as calculating a time layer from the previous layer or from the entire history, calculating the observed value and the independent variable from the task variables, and comparing the curve with a reference. Special algorithms can be used for solving quite general parabolic boundary value problems with free boundaries and with various quasilinear (i.e., linear with respect to the derivative only) boundary conditions, as well as for calculating the distance between curves in different metric spaces and with different normalizations; this is the middle level of abstraction. At the high level, it is enough to choose a ready, tested model for a particular material and to adapt it to the experimental conditions.
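The low-level, callback-based organization described above can be sketched schematically. The actual HIMICOS interfaces are not reproduced here; all names, the toy kinetics, and the curve metric below are our assumptions.

```python
# Callback architecture: the user supplies a time-layer update, an
# observable extractor, and a curve metric; the driver is generic.
from typing import Callable, List

def simulate(step: Callable[[List[float]], List[float]],
             observe: Callable[[List[float]], float],
             state: List[float], n_steps: int) -> List[float]:
    curve = []
    for _ in range(n_steps):
        state = step(state)              # user-defined time-layer update
        curve.append(observe(state))     # user-defined observed value
    return curve

def l2_distance(curve, reference):       # one choice of curve metric
    return sum((c - r) ** 2 for c, r in zip(curve, reference)) ** 0.5

# Toy fast-diffusion decomposition model: reacted fraction, first-order kinetics.
step = lambda s: [s[0] + 0.05 * (1.0 - s[0])]
curve = simulate(step, lambda s: s[0], [0.0], 50)
reference = [1 - 0.95 ** (k + 1) for k in range(50)]   # exact solution
print(l2_distance(curve, reference))                   # ~0
```

Inverse problems then amount to minimizing such a distance over the material parameters, with the same driver reused for every model variant.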
- Computer model development for a verified computational experiment to restore the parameters of bodies with arbitrary shape and dielectric properties
Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1555-1571
One of the main problems of this work is the creation of a virtual laboratory stand that yields reliable characteristics which can be proven to match reality, taking errors and noise into account (this is the main feature distinguishing a computational experiment from model studies). The following task is considered: a rectangular waveguide operating in single mode has a technological hole cut in its wide wall, through which a sample for research is placed into the cavity of the transmission line. The recovery algorithm is as follows: the network parameters (S11 and/or S21) are measured in the laboratory in the transmission line containing the sample; in the computer model of the laboratory stand, the sample geometry is reconstructed and an iterative process of optimization (or sweeping) of the electrophysical parameters is started, with the experimental data serving as the target of this process and an interpretive estimate of proximity (the residual) as the stop criterion. It is important to note that the developed computer model, despite its apparent simplicity, is inherently ill-conditioned. The Comsol modeling environment is used to set up the computational experiment. The results of the computational experiment coincided with the results of laboratory studies to a good degree of accuracy. Thus, experimental verification was carried out for several significant components, both of the computer model in particular and of the algorithm for restoring the target parameters in general. The computer model developed and described in this work may be used effectively in a computational experiment to restore the full dielectric parameters of a target with complex geometry. Weak bianisotropy effects can also be detected, including chirality, gyrotropy, and material nonreciprocity. The resulting model is by definition incomplete, but its completeness is the highest among the considered options, and at the same time the model is well conditioned. Particular attention is paid to the modeling of the coaxial-to-waveguide transition; it is shown that a discrete-element approach is preferable to direct modeling of the geometry of the microwave device.
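The recovery loop itself is simple to sketch. In the fragment below, `run_solver` is a hypothetical stand-in for the call into the Comsol model (it is not a real Comsol API), and the frequency band, grid, and toy response are our assumptions.

```python
# Sweep the unknown complex permittivity, drive the solver with each
# candidate, and keep the value whose simulated S11 is closest (in the
# residual sense) to the measured one.
import numpy as np

def run_solver(eps_r, freqs):
    # Hypothetical placeholder for the full-wave waveguide-plus-sample model;
    # returns a toy S11 over the frequency grid.
    return (eps_r - 1.0) / (eps_r + 1.0) * np.exp(-1j * freqs / 1e10)

def recover_permittivity(s11_measured, freqs, candidates):
    def residual(eps_r):
        return np.linalg.norm(run_solver(eps_r, freqs) - s11_measured)
    return min(candidates, key=residual)   # sweep; an optimizer could replace this

freqs = np.linspace(8e9, 12e9, 51)          # X-band sweep, an assumption
true_eps = 2.2 - 0.01j
s11_meas = run_solver(true_eps, freqs) \
    + 1e-4 * np.random.default_rng(3).standard_normal(51)   # noisy "measurement"
grid = [er - 1j * ei
        for er in np.arange(1.5, 3.01, 0.05) for ei in (0.0, 0.01, 0.02)]
print(recover_permittivity(s11_meas, freqs, grid))           # close to 2.2-0.01j
```

The ill-conditioning noted above shows up in this loop as a flat residual landscape, which is why the quality of the stand model and of the residual estimate matters.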
- Dynamic regimes of the stochastic “prey – predatory” model with competition and saturation
Computer Research and Modeling, 2019, v. 11, no. 3, pp. 515-531
We consider a “predator – prey” model that takes into account competition among the prey, competition among the predators for resources other than the prey, and their interaction described by a Holling type II trophic function. An analysis of the attractors is carried out depending on the coefficient of competition among the predators. In the deterministic case, this model demonstrates complex behavior associated with local (Andronov–Hopf and saddle-node) and global (birth of a cycle from a separatrix loop) bifurcations. An important feature of this model is the disappearance of a stable cycle due to a saddle-node bifurcation. As a result of the presence of competition in both populations, parametric zones of mono- and bistability are observed. In the parametric zones of bistability, the system has either two coexisting equilibria or a coexisting cycle and equilibrium. Here we investigate the geometric arrangement of the attractors and the separatrices that bound their basins of attraction; such a study is an important component in understanding stochastic phenomena. In this model, the combination of nonlinearity and random perturbations leads to the appearance of new phenomena with no analogues in the deterministic case, such as noise-induced transitions through the separatrix, stochastic excitability, and the generation of mixed-mode oscillations. For the parametric study of these phenomena, we use the stochastic sensitivity function technique and the confidence domain method. In the bistability zones, we study the deformations of the equilibrium or oscillation regimes under stochastic perturbation. The geometric criterion for the occurrence of such qualitative changes is the intersection of the confidence domains with the separatrix of the deterministic model. In the zone of monostability, we reveal the phenomena of explosive change in population size, as well as the extinction of one or both populations under minor changes in external conditions. With the help of the confidence domain method, we solve the problem of estimating the proximity of a stochastic population to dangerous boundaries, upon reaching which the coexistence of the populations is destroyed and their extinction is observed.
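Noise-induced transitions of the kind studied here are typically probed by direct simulation of the stochastically perturbed system. The sketch below is hedged: the paper's exact equations and coefficients are not given in the abstract, so this generic form with logistic prey competition, predator self-competition, a Holling type II response, and multiplicative noise is our assumption.

```python
# Euler-Maruyama simulation of a stochastic predator-prey system with
# competition in both populations and a Holling type II trophic function.
import numpy as np

def euler_maruyama(x0, y0, eps, T=200.0, dt=1e-3, seed=0):
    a, b, c, d, h = 1.0, 0.5, 0.6, 0.3, 0.4       # illustrative parameters
    rng = np.random.default_rng(seed)
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(int(T / dt)):
        holling = x / (1.0 + h * x)               # type II functional response
        dx = x * (a - x) - c * holling * y        # prey with logistic competition
        dy = d * holling * y - b * y * y          # predator with self-competition
        x += dx * dt + eps * x * np.sqrt(dt) * rng.standard_normal()
        y += dy * dt + eps * y * np.sqrt(dt) * rng.standard_normal()
        x, y = max(x, 0.0), max(y, 0.0)           # populations stay nonnegative
        xs.append(x); ys.append(y)
    return np.array(xs), np.array(ys)

xs, ys = euler_maruyama(0.8, 0.5, eps=0.05)
print(xs[-1], ys[-1], (ys == 0).any())  # check whether noise drove extinction
```

Running such an ensemble over a grid of noise intensities is how the crossing of a confidence domain over the separatrix translates into observed transition frequencies.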