All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Evaluation of the scalability property of the program for the simulation of atmospheric chemical transport by means of the simulator gem5
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 773-794

In this work we have developed a new, efficient program for the numerical simulation of 3D global chemical transport on an adaptive finite-difference grid. The grid allows us to concentrate points in the regions where the flow variables change sharply and to coarsen it in the regions of their smooth behavior, which significantly reduces the grid size. We represent the adaptive grid with a combination of dynamic (tree, linked list) and static (array) data structures. The dynamic data structures are used for grid reconstruction, while the calculations of the flow variables are based on the static data structures. The introduction of the static data structures speeds up the program by a factor of 2 compared with the conventional approach, which represents the grid with dynamic data structures only.
We wrote and tested our program on a computer with 6 CPU cores. Using the computer microarchitecture simulator gem5, we estimated the scalability of the program on a significantly greater number of cores (up to 32), using several models of a computer system with the design "computational cores – cache – main memory". It is shown that the microarchitecture of a computer system has a significant impact on scalability: the same program demonstrates different efficiency on different computer microarchitectures. For example, we obtain a speedup of 14.2 on a 32-core processor with 2 cache levels, but a speedup of 22.2 on a 32-core processor with 3 cache levels. The execution time of a program on a computer model in gem5 is $10^4$–$10^5$ times greater than the execution time of the same program on a real computer and equals 1.5 hours for the most complex model.
Also in this work we describe how to configure gem5 and how to perform simulations with gem5 most efficiently.
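The tree-plus-array grid representation described above lends itself to a compact illustration. Below is a minimal Python sketch, not the authors' code: a 1D grid is refined through a dynamic tree where a test field changes sharply, then flattened into static NumPy arrays for the compute phase. The names (`Cell`, `refine_where_steep`, `flatten`) and the refinement criterion are our assumptions.

```python
# Illustrative sketch: dynamic tree for grid reconstruction,
# static arrays for the compute phase.
import numpy as np

class Cell:
    """Dynamic tree node used only during grid reconstruction."""
    def __init__(self, x_left, x_right):
        self.x_left, self.x_right = x_left, x_right
        self.children = None          # None => leaf cell

    def refine_where_steep(self, f, threshold, max_depth, depth=0):
        # Split the cell where the flow variable f changes sharply.
        if depth >= max_depth:
            return
        if abs(f(self.x_right) - f(self.x_left)) > threshold:
            mid = 0.5 * (self.x_left + self.x_right)
            self.children = (Cell(self.x_left, mid), Cell(mid, self.x_right))
            for child in self.children:
                child.refine_where_steep(f, threshold, max_depth, depth + 1)

    def leaves(self):
        if self.children is None:
            yield self
        else:
            for child in self.children:
                yield from child.leaves()

def flatten(root, f):
    """Static arrays used in the compute phase: cell centers and values."""
    centers = np.array([0.5 * (c.x_left + c.x_right) for c in root.leaves()])
    return centers, f(centers)

# Usage: refine around the steep front of tanh, then compute on arrays.
field = lambda x: np.tanh(50 * (x - 0.5))
root = Cell(0.0, 1.0)
root.refine_where_steep(field, 0.1, max_depth=10)
centers, values = flatten(root, field)
print(f"{centers.size} cells, concentrated near x = 0.5")
```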
-
Models of phytoplankton distribution over chlorophyll in various habitat conditions. Estimation of aquatic ecosystem bioproductivity
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1177-1190

A model of phytoplankton abundance dynamics is proposed that depends on changes in the chlorophyll content of phytoplankton under the influence of changing environmental conditions. The model takes into account the dependence of biomass growth on environmental conditions, as well as on photosynthetic chlorophyll activity. The light and dark stages of photosynthesis are distinguished. The processes of chlorophyll consumption during photosynthesis in the light and the growth of chlorophyll mass together with phytoplankton biomass are described. The model takes into account environmental conditions such as mineral nutrients, illumination and water temperature. The model is spatially distributed, with the spatial variable corresponding to the mass fraction of chlorophyll in phytoplankton; in this way, the possible spread of chlorophyll content in phytoplankton is taken into consideration. The model calculates the density distribution of phytoplankton over the fraction of chlorophyll in it, as well as the rate of production of new phytoplankton biomass. In parallel, point analogs of the distributed model are considered. The diurnal and seasonal (during the year) dynamics of phytoplankton distribution over the chlorophyll fraction are demonstrated, and the rate of primary production under daily or seasonally changing environmental conditions is characterized. Model dynamics of phytoplankton biomass growth show that in the light this growth is about twice as large as in the dark, which means that illumination significantly affects the rate of production. The seasonal dynamics demonstrate accelerated growth of biomass in spring and autumn. The spring maximum is associated with warming under the conditions of nutrients accumulated in winter, and the autumn, slightly smaller maximum, with the accumulation of nutrients during the summer decline in phytoplankton biomass; in summer the biomass decreases, again due to a deficiency of nutrients. Thus, in the presence of light, mineral nutrition plays the main role in phytoplankton dynamics.
In general, the model demonstrates phytoplankton biomass dynamics qualitatively similar to classical concepts under daily and seasonal changes in the environment. The model seems suitable for assessing the bioproductivity of aquatic ecosystems, and it can be supplemented with equations, or terms of equations, for a more detailed description of the complex processes of photosynthesis. Introducing variables of the physical habitat space and coupling the model with satellite information on the water surface leads to model estimates of the bioproductivity of vast marine areas.
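As a rough illustration of the point analog mentioned above, here is a hedged Python sketch with our own toy equations, not the paper's: biomass grows in proportion to its chlorophyll fraction and nutrient availability, growth in the light is about twice that in the dark, and chlorophyll is consumed by photosynthesis in the light while being rebuilt together with new biomass. All rate constants are made-up placeholders.

```python
def step(B, c, light, nutrients, dt,
         mu_max=1.0, k_n=0.5, use=0.3, rebuild=0.2, c_opt=0.03):
    """One Euler step; all rate constants are illustrative placeholders."""
    # Growth in the light is about twice the growth in the dark (cf. above);
    # it scales with nutrient uptake and with the chlorophyll fraction c.
    growth = (mu_max * (0.5 + 0.5 * light)
              * nutrients / (k_n + nutrients) * (c / c_opt) * B)
    # The light stage consumes chlorophyll; synthesis pulls c back to c_opt.
    dc = (-use * light * c + rebuild * (c_opt - c)) * dt
    return B + growth * dt, max(c + dc, 0.0)

# Diurnal cycle: 12 hours of light, 12 of dark, over two days.
B, c = 1.0, 0.03
for hour in range(48):
    light = 1.0 if hour % 24 < 12 else 0.0
    B, c = step(B, c, light, nutrients=1.0, dt=1 / 24)
print(f"biomass after two days: {B:.2f}, chlorophyll fraction: {c:.4f}")
```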
-
Modeling the response of polycrystalline ferroelectrics to high-intensity electric and mechanical fields
Computer Research and Modeling, 2022, v. 14, no. 1, pp. 93-113

A mathematical model is presented that describes the irreversible processes of polarization and deformation of polycrystalline ferroelectrics in external electric and mechanical fields of high intensity, under which the internal structure and the properties of the material change. Irreversible phenomena are modeled in a three-dimensional setting for the case of simultaneous action of an electric field and mechanical stresses. The object of the research is a representative volume in which the residual phenomena, in the form of the induced and irreversible parts of the polarization vector and the strain tensor, are investigated. The main task of modeling is to construct constitutive relations connecting the polarization vector and the strain tensor, on the one hand, and the electric field vector and the mechanical stress tensor, on the other hand. A general case is considered in which the direction of the electric field may not coincide with any of the principal directions of the mechanical stress tensor. For the reversible components, the constitutive relations are constructed in the form of linear tensor equations, in which the elastic moduli and dielectric permittivities depend on the residual strain, and the piezoelectric moduli depend on the residual polarization. The constitutive relations for the irreversible parts are constructed in several stages. First, an auxiliary model is constructed for the ideal, hysteresis-free case, in which all spontaneous polarization vectors can rotate in the fields of external forces without mutual influence on each other. A numerical method is proposed for calculating the resulting maximum possible values of polarization and deformation in this ideal case, in the form of surface integrals over the unit sphere with a distribution density obtained from the statistical Boltzmann law. Then the energy costs required to break the mechanisms holding the domain walls are estimated, and the work of the external fields in the real and ideal cases is calculated. On this basis, the energy balance is derived and the constitutive relations for the irreversible components are obtained in the form of equations in differentials. A scheme for the numerical solution of these equations is developed to determine the current values of the required irreversible characteristics in the given electric and mechanical fields. For cyclic loads, dielectric, deformation and piezoelectric hysteresis curves are plotted.
The developed model can be embedded into a finite element package for calculating inhomogeneous residual polarization and deformation fields, with subsequent determination of the physical moduli of inhomogeneously polarized ceramics as a locally anisotropic body.
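The ideal-case computation named above, surface integrals over the unit sphere with a Boltzmann density, can be sketched numerically. The following Python fragment is our illustration under assumed units and constants, not the authors' scheme: it averages dipole orientations n over the sphere with weight exp(p_s E·n / kT) using a midpoint quadrature.

```python
import numpy as np

def ideal_polarization(E, p_s=0.3, kT=1.0, n_theta=200, n_phi=400):
    """Mean polarization <p_s n> over the unit sphere, Boltzmann-weighted."""
    dth, dph = np.pi / n_theta, 2 * np.pi / n_phi
    theta = (np.arange(n_theta) + 0.5) * dth          # midpoint rule
    phi = (np.arange(n_phi) + 0.5) * dph
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    n = np.stack([np.sin(TH) * np.cos(PH),            # orientation vectors
                  np.sin(TH) * np.sin(PH),
                  np.cos(TH)])
    # Boltzmann density over orientations times the area element sin(theta).
    w = np.exp(p_s * np.tensordot(E, n, axes=1) / kT) * np.sin(TH) * dth * dph
    return p_s * (n * w).sum(axis=(1, 2)) / w.sum()   # saturates at |p| = p_s

print(ideal_polarization(np.array([0.0, 0.0, 2.0])))   # moderate field along z
print(ideal_polarization(np.array([0.0, 0.0, 50.0])))  # near saturation
```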
-
Computer analysis of the bone regeneration strength in a model system of osteosynthesis by the Ilizarov fixator with static loads
Computer Research and Modeling, 2014, v. 6, no. 3, pp. 427-440

A three-dimensional finite element model of a biomechanical system of adequate complexity, with solid, shell and beam-type elements, was built. The model includes the Ilizarov fixator and a tibial bone simulator with regenerating tissue at the fracture location. The proposed model allows us to specify the orthotropic elastic properties of the tibial bone model in the cortical and trabecular zones. It is also possible to change the basic geometrical and mechanical characteristics of the biomechanical system, change the finite element mesh density and define different external loads, such as pressure on the bone and compression or distraction between the repositioned rings of the Ilizarov device.
By using special ANSYS APDL macros, the stress-strain state in the fracture zone was calculated for various static loads on the bone simulator, for compression or distraction between the repositioned rings, and for the various mechanical properties corresponding to different stages of bone regenerate formation (gelatinous, cartilaginous, trabecular and cortical bone remodeling). The obtained results allow us to estimate the permissible values of the external pressure on the bone and of the displacements of the Ilizarov fixator rings at different stages of bone regeneration, based on the criterion of admissible maximum stresses in the callus. The presented data can be used in a clinical setting for planning, implementing and monitoring the loading regimes of transosseous osteosynthesis with the external Ilizarov fixator.
-
Survey of convex optimization of Markov decision processes
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 329-353

This article reviews both historical achievements and modern results in the field of Markov Decision Processes (MDP) and convex optimization. This review is the first attempt to cover the field of reinforcement learning in Russian in the context of convex optimization. The fundamental Bellman equation and the criteria of optimality based on it for policies, i.e. strategies that make decisions based on the currently known state of the environment, are considered, along with the main iterative algorithms of policy optimization based on solving the Bellman equations. An important part of this article is the consideration of an alternative to the $Q$-learning approach: the method of directly maximizing the agent's average reward for the chosen strategy from interaction with the environment. The solution of this convex optimization problem can be represented as a linear programming problem. The paper demonstrates how the convex optimization apparatus is used to solve the problem of Reinforcement Learning (RL). In particular, it is shown how the concept of strong duality allows us to naturally modify the formulation of the RL problem, showing the equivalence between maximizing the agent's reward and finding its optimal strategy. The paper also discusses the complexity of MDP optimization with respect to the number of state–action–reward triples obtained from interaction with the environment. Optimal bounds on the complexity of solving MDPs are presented for the case of an ergodic process with an infinite horizon, as well as for a non-stationary process with a finite horizon, which can be restarted several times in a row or immediately run in parallel in several threads. The review also covers the latest results on reducing the gap between the lower and upper bounds on the complexity of MDP optimization with average reward (Averaged MDP, AMDP). In conclusion, real-valued parametrization of the agent's policy and a class of gradient optimization methods through maximizing the $Q$-function of value are considered. In particular, a special class of MDPs with restrictions on the value of the policy (Constrained Markov Decision Process, CMDP) is presented, for which a general primal-dual approach to optimization with strong duality is proposed.
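For concreteness, here is a minimal Python sketch of the classical value-iteration algorithm built on the Bellman equation the survey discusses; the toy MDP numbers are invented for illustration only.

```python
import numpy as np

def value_iteration(P, r, gamma=0.9, tol=1e-8):
    """Iterate the Bellman operator V <- max_a (r + gamma * P V)."""
    n_states, n_actions = r.shape
    V = np.zeros(n_states)
    while True:
        Q = r + gamma * P @ V          # Q[s, a] = r[s, a] + gamma * E[V(s')]
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)   # optimal value, greedy policy
        V = V_new

# Toy 2-state, 2-action MDP (made-up numbers).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])    # transition kernel P[s, a, s']
r = np.array([[1.0, 0.0],
              [0.0, 2.0]])                  # rewards r[s, a]
V, policy = value_iteration(P, r)
print("V* =", V, "policy =", policy)
```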
-
Multifractal and entropy statistics of seismic noise in Kamchatka in connection with the strongest earthquakes
Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1507-1521

The study of the properties of seismic noise in Kamchatka is based on the idea that noise is an important source of information about the processes preceding strong earthquakes. The hypothesis is considered that an increase in seismic hazard is accompanied by a simplification of the statistical structure of seismic noise and an increase in the spatial correlations of its properties. The entropy of the distribution of squared wavelet coefficients, the support width of the multifractal singularity spectrum, and the Donoho–Johnstone index were used as statistics characterizing the noise. The values of these parameters reflect complexity: if a random signal is close in its properties to white noise, then the entropy is at its maximum and the other two parameters are at their minimum. The statistics are calculated for 6 station clusters. For each cluster, daily median noise properties are computed in successive 1-day time windows, resulting in an 18-dimensional (3 properties times 6 station clusters) time series of properties. To extract the common features of changes in the noise parameters, the principal component method is applied to each cluster of stations, compressing the information into a 6-dimensional daily time series of principal components. Spatial noise coherences are estimated as the set of maximum pairwise quadratic coherence spectra between the principal components of the station clusters in a sliding time window 365 days long. By calculating histograms of the distribution of the cluster numbers in which the minimum and maximum values of the noise statistics are reached within a sliding 365-day window, the migration of seismic hazard areas was assessed in comparison with strong earthquakes of magnitude at least 7.
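Two of the statistics named above can be sketched in a few lines of Python. This is our reading, not the authors' code: the normalized entropy of squared wavelet coefficients, and a Donoho–Johnstone-style index taken here, as an assumption about the exact definition, to be the share of coefficients above the universal threshold sigma*sqrt(2 ln N). The sketch requires the PyWavelets package (pywt).

```python
import numpy as np
import pywt

def wavelet_entropy(x, wavelet="db4", level=5):
    """Normalized entropy of squared wavelet coefficients (1 = noise-like)."""
    coeffs = np.concatenate(pywt.wavedec(x, wavelet, level=level))
    p = coeffs**2 / np.sum(coeffs**2)        # squared-coefficient distribution
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(p.size)

def donoho_johnstone_index(x, wavelet="db4", level=5):
    """Assumed reading: share of coefficients above the universal threshold."""
    coeffs = np.concatenate(pywt.wavedec(x, wavelet, level=level))
    sigma = np.median(np.abs(coeffs)) / 0.6745      # robust noise scale
    thr = sigma * np.sqrt(2 * np.log(coeffs.size))  # universal threshold
    return np.mean(np.abs(coeffs) > thr)

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
print(wavelet_entropy(noise), donoho_johnstone_index(noise))
```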
-
Traffic flow speed prediction on transportation graph with convolutional neural networks
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 359-367

Short-term prediction of road traffic conditions is one of the main tasks of transportation modelling. Its main purposes are traffic control, accident reporting, avoiding traffic jams through knowledge of the traffic flow, and subsequent transportation planning. A number of solutions exist, both model-driven and data-driven, that have proven successful in capturing the dynamics of traffic flow. Nevertheless, most space-time models suffer from high mathematical complexity and low efficiency. Artificial neural networks, one of the prominent data-driven approaches, show promising performance in modelling the complexity of traffic flow. We present a neural network architecture for traffic flow prediction on a real-world road network graph. The model is based on the combination of a recurrent neural network and a graph convolutional neural network: the recurrent neural network models temporal dependencies, and the convolutional neural network is responsible for extracting spatial features from the traffic. To make predictions several steps ahead, the encoder-decoder architecture is used, which reduces noise propagation due to inexact predictions. To model the complexity of traffic flow, we employ a multilayered architecture. Deeper neural networks are more difficult to train; to speed up the training process, we use skip-connections between layers, so that each layer learns only the residual function with respect to the previous layer's outputs. The resulting neural network was trained on raw data from traffic flow detectors of the US highway system with a resolution of 5 minutes. Three metrics (mean absolute error, mean relative error, and mean squared error) were used to estimate the quality of the prediction. It was found that for all metrics the proposed model achieved lower prediction error than previously published models, such as Vector Auto Regression, LSTM and Graph Convolution GRU.
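A minimal PyTorch sketch of the core building block follows: a GRU cell whose inputs are first mixed over the road graph by a one-hop graph convolution. This is our illustration of the combination described above, not the paper's code; the adjacency normalization and layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class GraphConvGRUCell(nn.Module):
    def __init__(self, a_hat, in_dim, hid_dim):
        super().__init__()
        self.register_buffer("a_hat", a_hat)          # normalized adjacency, (N, N)
        self.gates = nn.Linear(in_dim + hid_dim, 2 * hid_dim)
        self.cand = nn.Linear(in_dim + hid_dim, hid_dim)

    def gconv(self, x):
        return self.a_hat @ x                         # mix features over neighbors

    def forward(self, x, h):
        xh = self.gconv(torch.cat([x, h], dim=-1))
        z, r = torch.sigmoid(self.gates(xh)).chunk(2, dim=-1)
        c = torch.tanh(self.cand(self.gconv(torch.cat([x, r * h], dim=-1))))
        return (1 - z) * h + z * c                    # new hidden state, (N, hid_dim)

# Usage on a toy 4-node road graph, 1 feature (speed), hidden size 8.
N, F, H = 4, 1, 8
a_hat = torch.eye(N) * 0.5 + torch.ones(N, N) / (2 * N)  # placeholder normalization
cell = GraphConvGRUCell(a_hat, F, H)
h = torch.zeros(N, H)
for t in range(12):                                   # encoder: read 12 history steps
    h = cell(torch.randn(N, F), h)
print(h.shape)
```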
-
Methods and problems in the kinetic approach for simulating biological structures
Computer Research and Modeling, 2018, v. 10, no. 6, pp. 851-866

The biological structure is considered as an open nonequilibrium system whose properties can be described on the basis of kinetic equations. New problems with nonequilibrium boundary conditions are introduced, in which the nonequilibrium distribution gradually tends to an equilibrium state. The region of spatial inhomogeneity has a scale that depends on the rate of mass transfer in the open system and on the characteristic time of metabolism. In the proposed approximation, the internal energy of the motion of molecules is much less than the energy of translational motion; in other terms, the kinetic energy of the average blood velocity is substantially higher than the energy of the chaotic motion of the same particles. We state that the relaxation problem models a living system. The flow of entropy into the system decreases downstream, which corresponds to Schrödinger's general idea that a living system "feeds on" negentropy. We introduce a quantity that determines the complexity of the biosystem: the difference between the nonequilibrium kinetic entropy and the equilibrium entropy at each spatial point, integrated over the entire spatial region. Solutions to the problems of spatial relaxation allow us to estimate the size of biosystems as regions of nonequilibrium. The results are compared with empirical data; in particular, for mammals we conclude that the larger the animal, the smaller the specific energy of metabolism. This feature is reproduced in our model, since the span of the nonequilibrium region is larger in a system where the reaction rate is lower or, in terms of the kinetic approach, where the relaxation time of the interaction between the molecules is longer. The approach is also used to estimate a part of a living system, namely a green leaf. The problems of aging as degradation of an open nonequilibrium system are considered. The analogy is related to the structure: in a closed system, the equilibrium of the structure is attained with the same molecules, while in the open system a transition occurs to the equilibrium of different particles, which change due to metabolism. Two essentially different time scales are distinguished, the ratio of which is approximately constant for various animal species. Under the assumption that these two time scales exist, the kinetic equation splits into two equations, describing the metabolic (stationary) and "degradative" (nonstationary) parts of the process.
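The complexity measure described above, equilibrium minus nonequilibrium entropy integrated over space, can be illustrated with a toy relaxation problem. The Python sketch below is our own construction, not the authors' model: a discrete-velocity distribution relaxes exponentially downstream toward a Maxwellian, and the integrated entropy deficit is reported.

```python
import numpy as np

v = np.linspace(-5, 5, 201)
dv = v[1] - v[0]

def maxwellian(T):
    f = np.exp(-v**2 / (2 * T))
    return f / (f.sum() * dv)                 # normalized on the velocity grid

def entropy(f):
    nz = f > 0
    return -np.sum(f[nz] * np.log(f[nz])) * dv   # kinetic entropy -int f ln f dv

f_eq = maxwellian(T=1.0)                      # equilibrium state
f_in = maxwellian(T=0.2)                      # colder nonequilibrium inflow
S_eq = entropy(f_eq)

lam = 1.0                                     # assumed relaxation length scale
x = np.linspace(0.0, 10.0, 400)
dx = x[1] - x[0]
# f(x, v) relaxes exponentially downstream; integrate the entropy deficit.
deficit = sum((S_eq - entropy(f_eq + (f_in - f_eq) * np.exp(-xi / lam))) * dx
              for xi in x)
print(f"integrated entropy deficit (complexity measure): {deficit:.4f}")
```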
-
Experimental identification of the organization of mental calculations of the person on the basis of algebras of different associativity
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 311-327

The work continues research on the ability of a person to improve the productivity of information processing by working in parallel or by improving the performance of analyzers. A person receives a series of tasks whose solution requires processing a certain amount of information. The time and the validity of the decision are recorded. For correctly solved tasks, the dependence of the average solution time on the amount of information in the task is determined. In accordance with the proposed method, the tasks involve calculating expressions in two algebras, one of which is associative and the other nonassociative. To facilitate the work of the subjects, figurative graphic images of the algebra elements were used in the experiment. Nonassociative calculations were implemented in the form of the game "rock-paper-scissors": it was necessary to determine the winning symbol in a long line of these figures, considering that they appear sequentially from left to right and each plays against the previous winning symbol. Associative calculations were based on the recognition of drawings from a finite set of simple images: it was necessary to determine which picture from this set was missing from the line, or to state that all the pictures were present; in each task at most one picture was missing. Computation in an associative algebra allows parallel counting, while in the absence of associativity only sequential computation is possible. Therefore, the analysis of the time for solving a series of tasks distinguishes between uniform sequential, accelerated sequential, and parallel computing strategies. In the experiments it was found that all subjects used a uniform sequential strategy to solve the nonassociative tasks. For the associative task, all subjects used parallel computing, and some accelerated the parallel computation as the complexity of the task grew. At high complexity, a small fraction of the subjects, judging by the evolution of the solution time, supplemented the parallel count with a sequential stage of calculations (possibly to check the solution). We developed a special method for assessing the rate at which a person processes input information, which allowed us to estimate the level of parallelism of the computation in the associative task: a parallelism level of two to three was registered. The characteristic speed of information processing in the sequential case (about one and a half symbols per second) is half the typical speed of human image recognition; apparently, the difference is the processing time actually spent on the calculation itself. For the associative task with the minimum amount of information, the solution time is close to that of the nonassociative case, or smaller by less than a factor of two. This is probably because, for a small number of symbols, recognition almost exhausts the calculations in the nonassociative task used.
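The two tasks translate directly into code, which makes the associativity contrast concrete. The Python sketch below uses the rules as described above; the function names and picture labels are ours.

```python
from functools import reduce

BEATS = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}

def play(winner, challenger):
    """Non-associative: the next symbol plays against the previous winner."""
    return challenger if (challenger, winner) in BEATS else winner

def winner_of_line(line):
    return reduce(play, line)             # forced sequential left-to-right fold

def missing_picture(line, alphabet):
    """Associative: halves can be processed in parallel and merged by union."""
    missing = set(alphabet) - set(line)   # at most one picture is absent
    return missing.pop() if missing else None

print(winner_of_line(["rock", "paper", "scissors", "rock"]))      # -> rock
print(missing_picture(["sun", "tree", "sun", "fish"],
                      {"sun", "tree", "fish", "boat"}))           # -> boat
```

Non-associativity is easy to verify here: play(play(rock, paper), scissors) gives scissors, while play(rock, play(paper, scissors)) gives rock, so the fold order matters and no parallel regrouping is possible.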
-
Ensemble building and statistical mechanics methods for MHC-peptide binding prediction
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1383-1395

The proteins of the Major Histocompatibility Complex (MHC) play a key role in the functioning of the adaptive immune system, and the identification of peptides that bind to them is an important step in the development of vaccines and in understanding the mechanisms of autoimmune diseases. Today, there are a number of methods for predicting the binding of a particular MHC allele to a peptide. One of the best such methods is NetMHCpan-4.0, which is based on an ensemble of artificial neural networks. This paper presents a methodology for qualitatively improving the neural network underlying NetMHCpan-4.0. The proposed method uses the ensemble construction technique and adds, as an input, an estimate from the Potts model taken from statistical mechanics, which is a generalization of the Ising model. In the general case, the model describes the interaction of spins in a crystal lattice. Within the framework of the proposed method, the model is used to better represent the physical nature of the interaction of the proteins included in the complex. To assess the interaction of the MHC + peptide complex, we use a two-dimensional Potts model with 20 states (corresponding to the standard amino acids). Solving the inverse problem using data on experimentally confirmed interacting pairs, we obtain the values of the parameters of the Potts model, which we then use to evaluate a new MHC + peptide pair, supplementing the input data of the neural network with this value. This approach, combined with the ensemble construction technique, improves prediction accuracy in terms of the positive predictive value (PPV) metric compared to the baseline model.
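The added input feature can be sketched as scoring a sequence with a 20-state Potts model whose fields h and couplings J are assumed to have been fitted beforehand by solving the inverse problem on confirmed binders. The Python fragment below is an illustrative reading, not the paper's implementation; the random parameters stand in for fitted ones.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"                # the 20 standard amino acids
IDX = {a: i for i, a in enumerate(AA)}

def potts_score(sequence, h, J):
    """Potts score sum_i h_i(a_i) + sum_{i<j} J_ij(a_i, a_j) of a sequence."""
    s = [IDX[a] for a in sequence]
    L = len(s)
    fields = sum(h[i, s[i]] for i in range(L))
    couplings = sum(J[i, j, s[i], s[j]]
                    for i in range(L) for j in range(i + 1, L))
    return fields + couplings              # appended to the network's input

# Toy random parameters stand in for the fitted ones.
rng = np.random.default_rng(1)
L = 9                                      # a 9-mer peptide, for illustration
h = rng.normal(size=(L, 20)) * 0.1
J = rng.normal(size=(L, L, 20, 20)) * 0.01
print(potts_score("SIINFEKLA", h, J))
```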