- Prediction of moving and unexpected motionless bottlenecks based on three-phase traffic theory
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 319-363
We present a simulation methodology for the prediction of "unexpected" bottlenecks, i.e., bottlenecks that occur suddenly and unexpectedly for drivers on a highway. Such an unexpected bottleneck can be either a moving bottleneck (MB) caused by a slow-moving vehicle or a motionless bottleneck caused by a stopped vehicle (SV). Based on simulations of a stochastic microscopic traffic flow model in the framework of Kerner's three-phase traffic theory, we show that reliable prediction of "unexpected" bottlenecks is possible through the use of a small share of probe vehicles (floating car data, FCD) randomly distributed in traffic flow. We have found that the time dependence of the probability of MB and SV prediction, as well as the accuracy of the estimation of the MB and SV location, depend considerably on the sequences of phase transitions from free flow (F) to synchronized flow (S) (F→S transition) and back from synchronized flow to free flow (S→F transition), as well as on speed oscillations in synchronized flow at the bottleneck. In the simulation approach, the identification of F→S and S→F transitions at an unexpected bottleneck is made in accordance with Kerner's three-phase traffic theory. The presented simulation methodology allows both predicting an unexpected bottleneck that suddenly occurs on a highway and distinguishing its origin, i.e., whether the bottleneck has occurred due to an MB or an SV.
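The abstract does not give the detection rules, so the following is only a minimal illustrative sketch, not Kerner's stochastic three-phase model: it shows how sparse probe-vehicle (FCD) speed reports might flag a suspected bottleneck and guess its type. All thresholds and data are hypothetical, and a real classifier would track the slow region over time, since the downstream front of an MB moves while that of an SV stays put.

```python
# Hypothetical illustration only: flag a suspected bottleneck from sparse
# probe-vehicle (FCD) speed reports. Not the paper's three-phase model.
SYNC_SPEED = 60.0   # km/h: below this we suspect synchronized flow (F->S)
STOP_SPEED = 5.0    # km/h: below this a probe is effectively stopped

def classify_reports(reports):
    """reports: list of (position_km, speed_kmh) tuples from probe vehicles."""
    slow = [(x, v) for x, v in reports if v < SYNC_SPEED]
    if not slow:
        return "free flow (F)"
    stopped = [x for x, v in slow if v < STOP_SPEED]
    front = max(x for x, _ in slow)   # downstream front of the slow region
    if stopped:
        return f"suspected motionless bottleneck (SV) near {max(stopped):.1f} km"
    return f"suspected bottleneck near {front:.1f} km; track the front over time to tell MB from SV"

print(classify_reports([(1.2, 95.0), (3.4, 52.0), (3.6, 48.0), (3.9, 2.0), (5.0, 90.0)]))
```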
- Comparison of mobile operating systems based on software reliability growth models
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 325-334
Evaluation of software reliability is an important part of the process of developing modern software. Many studies aim at improving models for measuring and predicting the reliability of software products, but little attention is paid to approaches for comparing existing systems in terms of software reliability. Despite its enormous practical importance (including for managing software development), a complete and proven comparison methodology does not exist. In this article, we propose a software reliability comparison methodology that makes extensive use of software reliability growth models (SRGMs). The proposed methodology provides a certain level of flexibility and abstraction while remaining objective, i.e., it relies on measurable comparison criteria. Moreover, given the comparison methodology together with a set of SRGMs and evaluation criteria, it becomes much easier to disseminate information about the reliability of a wide range of software systems. The methodology was evaluated on three open-source mobile operating systems: Sailfish, Tizen, and CyanogenMod.
A byproduct of our study is a comparison of the three analyzed open-source mobile operating systems. The goal of this research is to determine which OS is the strongest in terms of reliability. To this end, we performed a GQM (Goal–Question–Metric) analysis and identified 3 questions and 8 metrics. Comparing the metrics, Sailfish is most often the best-performing OS; however, it is also most often the worst-performing one. By contrast, Tizen scores best in 3 cases out of 8, but worst in only one case out of 8.
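The abstract names SRGMs but not a specific model set, so here is a minimal hedged sketch of fitting one classic SRGM, the Goel–Okumoto model m(t) = a(1 − e^(−bt)), to a cumulative defect history; the defect counts below are made up for illustration.

```python
# Sketch: fit the Goel-Okumoto SRGM m(t) = a * (1 - exp(-b t)) to cumulative
# defect counts; comparing OSes then reduces to comparing fitted parameters
# and goodness of fit across their defect histories. Data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative number of defects detected by time t."""
    return a * (1.0 - np.exp(-b * t))

t = np.arange(1, 13)                            # months since release
defects = np.array([12, 21, 28, 34, 38, 42,     # cumulative defects (made up)
                    45, 47, 49, 50, 51, 51])

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, defects, p0=(60.0, 0.1))
print(f"estimated total defects a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
```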
- Algorithms of through calculation for damage processes
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 645-666
The paper reviews the existing approaches to calculating the destruction of solids. The main attention is paid to algorithms that use a unified approach to the calculation of deformation for both the intact and the damaged states of the material. A thermodynamic derivation is presented of unified rheological relationships that take into account the elastic, viscous, and plastic properties of materials and describe the loss of resistance to deformation as microdamage accumulates. It is shown that the mathematical model under consideration provides a continuous dependence of the solution on the input parameters (parameters of the material medium, initial and boundary conditions, discretization parameters) even with softening of the material.
Explicit and implicit matrix-free algorithms for calculating the evolution of deformation and fracture development are presented. The implicit schemes are implemented using iterations of the conjugate gradient method, with the cost of each iteration exactly coinciding with the cost of a time step of a two-layer explicit scheme, so the solution algorithms are very simple; a sketch of such a matrix-free iteration is given below.
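As a minimal sketch of the matrix-free idea (not the paper's rheological model), the conjugate-gradient loop below applies the operator only through a function call, so each iteration costs one operator application, just like an explicit time step; the 1D Laplacian and right-hand side are placeholders.

```python
# Matrix-free conjugate gradient: the operator enters only via apply_operator,
# so one CG iteration costs one operator application. Placeholder operator.
import numpy as np

def apply_operator(u):
    """Matrix-free 1D Laplacian with homogeneous Dirichlet end conditions."""
    au = np.empty_like(u)
    au[1:-1] = 2.0 * u[1:-1] - u[:-2] - u[2:]
    au[0], au[-1] = u[0], u[-1]
    return au

def conjugate_gradient(b, x0, tol=1e-10, max_iter=1000):
    x = x0.copy()
    r = b - apply_operator(x)
    p = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        ap = apply_operator(p)            # the only costly step per iteration
        alpha = rr / (p @ ap)
        x += alpha * p
        r -= alpha * ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

b = np.zeros(101); b[50] = 1.0
u = conjugate_gradient(b, np.zeros(101))
```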
The results of solving typical problems of destruction of solid deformable bodies for slow (quasistatic) and fast (dynamic) deformation processes are presented. Based on the experience of calculations, recommendations are given for modeling the processes of destruction and ensuring the reliability of numerical solutions.
- A multilayer neural network for determination of particle size distribution in the Dynamic Light Scattering problem
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 265-273
Solving the dynamic light scattering problem makes it possible to determine the particle size distribution (PSD) from the spectrum of the intensity of scattered light. As a result of the experiment, an intensity curve is obtained. The experimentally obtained intensity spectrum is compared with the theoretically expected spectrum, which is a Lorentzian line. The main task is to determine, on the basis of these data, the relative concentrations of particles of each class present in the solution. The article presents a method for constructing and using a neural network trained on synthetic data to determine the PSD in a solution in the range of 1–500 nm. The neural network has a fully connected layer of 60 neurons with the ReLU activation function at its output, a layer of 45 neurons with the same activation function, a dropout layer, and two layers with 15 and 1 neurons (the network output). The article describes how the network was trained and tested on synthetic and experimental data. On the synthetic data, the root-mean-square error (RMSE) was 1.3157 nm. Experimental data were obtained for particle sizes of 200 nm, 400 nm, and a solution containing representatives of both sizes. The results of the neural network and the classical linear methods are compared. The disadvantage of the classical methods is that it is difficult to choose the degree of regularization: too much regularization oversmooths the particle size distribution curves, while weak regularization yields oscillating curves and low reliability of the results. The paper shows that the neural network gives a good prediction for particles of large size. For small sizes the prediction is worse, but the error decreases quickly as the particle size increases.
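A sketch of the architecture as we read the description above: fully connected layers of 60 and 45 neurons with ReLU, a dropout layer, then layers of 15 and 1 neurons. The input size (number of points in the discretized intensity spectrum), the dropout rate, and the activation of the 15-neuron layer are not stated in the abstract and are assumed here.

```python
# Sketch of the described 60-45-dropout-15-1 fully connected network.
# N_SPECTRUM_POINTS, the dropout rate, and the activation of the 15-neuron
# layer are assumptions, not from the paper.
import torch
import torch.nn as nn

N_SPECTRUM_POINTS = 128   # assumed length of the discretized intensity spectrum

model = nn.Sequential(
    nn.Linear(N_SPECTRUM_POINTS, 60), nn.ReLU(),
    nn.Linear(60, 45), nn.ReLU(),
    nn.Dropout(p=0.2),                # dropout rate assumed
    nn.Linear(45, 15), nn.ReLU(),     # activation here assumed
    nn.Linear(15, 1),                 # single output neuron, as described
)

spectrum = torch.randn(1, N_SPECTRUM_POINTS)   # placeholder input
prediction = model(spectrum)
```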
- On numerical solution of joint inverse geophysical problems with structural constraints
Computer Research and Modeling, 2020, v. 12, no. 2, pp. 329-343
Inverse geophysical problems are difficult to solve due to their mathematically ill-posed formulation and large computational complexity. Geophysical exploration in frontier areas is further complicated by the lack of reliable geological information. In this case, inversion methods that allow joint interpretation of several types of geophysical data are recognized to be of major importance. This paper is dedicated to one such inversion method, based on minimization of the determinant of the Gram matrix for a set of model vectors. Within the framework of this approach, we minimize a nonlinear functional consisting of the squared norms of the data residuals of the different types, a sum of stabilizing functionals, and a term that measures the structural similarity between the different model vectors. We apply this approach to a synthetic seismic and electromagnetic data set. Specifically, we study joint inversion of the acoustic pressure response together with the controlled-source electrical field, imposing structural constraints on the resulting electrical conductivity and P-wave velocity distributions.
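In our notation (the paper's exact formulas are not given in the abstract), the minimized functional can be written as follows; structural-similarity Gramian terms are commonly built on the spatial gradients of the models, which we assume here:

```latex
\Phi(m_1, m_2) =
  \sum_{i=1}^{2} \bigl\| A_i(m_i) - d_i \bigr\|^2
  + \sum_{i=1}^{2} \alpha_i \, S_i(m_i)
  + \beta \det
    \begin{pmatrix}
      (\nabla m_1, \nabla m_1) & (\nabla m_1, \nabla m_2) \\
      (\nabla m_2, \nabla m_1) & (\nabla m_2, \nabla m_2)
    \end{pmatrix}
```

where m_1 and m_2 are the P-wave velocity and electrical conductivity models, A_i the forward operators, d_i the observed data, S_i the stabilizing functionals, and alpha_i, beta the weights; the determinant (the Gramian) vanishes when the model gradients are parallel, i.e., when the structures coincide.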
We begin with the problem formulation and present the numerical method for the inverse problem. We implemented a conjugate-gradient algorithm for nonlinear optimization. The efficiency of our approach is demonstrated in numerical experiments in which the true 3D electrical conductivity model was assumed to be known, while the velocity model was constructed during inversion of the seismic data. The true velocity model was based on a simplified geological structure of a marine prospect. Synthetic seismic data were used as input for our minimization algorithm. The resulting velocity model not only fits the data but also has structural similarity with the given conductivity model. Our tests have shown that an optimally chosen weight of the Gramian term may considerably improve the resolution of the final models.
- Game-theoretic model of coordination of interests in the innovative development of corporations
Computer Research and Modeling, 2016, v. 8, no. 4, pp. 673-684
Dynamic game-theoretic models of corporate innovative development are investigated. The proposed models are based on the concordance of private and public interests of agents. It is supposed that the structure of interests of each agent includes both private (personal) and public (interests of the whole company, connected first of all with its innovative development) components, and the agents allocate their personal resources between these two directions. The system dynamics is described by a difference (not differential) equation. The proposed model of innovative development is studied by simulation and by enumeration of the domains of feasible controls with a constant step. The main contribution of the paper is a comparative analysis of the efficiency of the methods of hierarchical control (compulsion or impulsion) for the information structures of Stackelberg or Germeier games (four structures in total) by means of indices of system compatibility. The proposed model is universal and can be used for scientifically grounded support of the innovative development programs of any economic firm. The features of a specific company are taken into account during model identification (determining the specific classes of model functions and the numerical values of their parameters), which forms a separate complex problem requiring analysis of statistical data and expert estimates. The following assumptions about the information rules of the hierarchical game are accepted: all players use open-loop strategies; the leader chooses and reports to the followers values of administrative (compulsion) or economic (impulsion) control variables, which can be either functions of time only (Stackelberg games) or also depend on the followers' controls (Germeier games); given the leader's strategy, all followers simultaneously and independently choose their strategies, which gives a Nash equilibrium in the followers' game. In a finite number of iterations, the proposed simulation algorithm either builds an approximate solution of the model or concludes that one does not exist; a sketch of the enumeration scheme is given below. The reliability and efficiency of the proposed algorithm follow from the properties of the scenario method and of direct ordered enumeration with a constant step. Comprehensive conclusions about the comparative efficiency of the methods of hierarchical control of innovations are drawn.
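A minimal hedged sketch of the enumeration scheme referenced above, under assumptions of ours: one leader announcing a scalar open-loop control on a constant-step grid, two followers reaching a Nash equilibrium by iterated best response on the same grid, and purely hypothetical payoff functions standing in for the model's private/public interest structure.

```python
# Hypothetical sketch of hierarchical control by direct ordered enumeration:
# the leader announces a control u; the followers then play a Nash equilibrium
# found by iterated best response on a constant-step grid. Payoffs are made up.
import numpy as np

grid = np.linspace(0.0, 1.0, 21)            # constant-step control grid

def follower_payoff(i, u, r):
    """Made-up payoff of follower i: private gain plus a share of public gain."""
    private = r[i] * (1.0 - r[i])           # return on personal resources
    public = 0.5 * u * sum(r)               # share of the company's innovation gain
    return (1.0 - u) * private + public

def nash_by_best_response(u, n=2, sweeps=50):
    r = [0.5] * n
    for _ in range(sweeps):
        for i in range(n):
            r[i] = max(grid, key=lambda x: follower_payoff(i, u, r[:i] + [x] + r[i + 1:]))
    return r

def leader_payoff(u, r):
    return u * sum(r)                       # made-up leader objective

best_u = max(grid, key=lambda u: leader_payoff(u, nash_by_best_response(u)))
print("leader's control:", best_u, "followers:", nash_by_best_response(best_u))
```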
- Numerical simulation of ice accretion in FlowVision software
Computer Research and Modeling, 2020, v. 12, no. 1, pp. 83-96
Certifying a transport airplane for flights under icing conditions requires calculations aimed at determining the dimensions and shapes of the ice bodies formed on the airplane surfaces. To date, there is no Russian-developed software for simulation of ice accretion that has been authorized by the Russian certifying supervisory authority. This paper describes the IceVision methodology, recently developed in Russia on the basis of the FlowVision software, for calculating ice accretion on airplane surfaces.
The main difference of the IceVision methodology from the other approaches known from the literature consists in using the Volume of Fluid (VOF) technology for tracking the surface of the growing ice body. The methodology assumes solving a time-dependent problem of continuous ice growth in the Eulerian formulation. The ice is explicitly present in the computational domain, and the energy equation is integrated inside the ice body. In the other approaches, the changing ice shape is taken into account by modifying the aerodynamic surface and using a Lagrangian mesh, and the heat transfer into the ice is accounted for by an empirical model.
The implemented mathematical model makes it possible to simulate the formation of rime (dry) and glaze (wet) ice and automatically identifies zones of rime and glaze ice. In a rime (dry) ice zone, the temperature of the contact surface between air and ice is calculated with account of ice sublimation and heat conduction inside the ice. In a glaze (wet) ice zone, the flow of the water film over the ice surface is allowed for; the film freezes due to evaporation and heat transfer into the air and the ice. The IceVision methodology allows for separation of the film. For simulation of the two-phase flow of air and droplets, a multi-velocity model is used within the Eulerian approach, and the droplet size distribution is taken into account. The computational algorithm takes account of the essentially different time scales of the physical processes proceeding in the course of ice accretion, viz. the air–droplet flow, the water-film flow, and the ice growth. Numerical solutions of validation test problems demonstrate the efficiency of the IceVision methodology and the reliability of the FlowVision results.
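The rime/glaze distinction can be illustrated with a crude freezing-fraction balance; this is emphatically not the IceVision model (which integrates the energy equation inside the ice body), and all coefficients and conditions below are hypothetical.

```python
# Crude Messinger-style illustration of the rime/glaze distinction: if the
# air can remove the latent heat of all impinging water (f >= 1), the ice is
# rime (dry); otherwise part of the water stays liquid and flows as a film
# (glaze, wet). Hypothetical numbers; not the IceVision energy balance.
L_FUSION = 3.34e5        # latent heat of fusion of water, J/kg

def freezing_fraction(m_imp, h_conv, t_surf_c, t_air_c):
    """Fraction of the impinging water mass flux m_imp (kg/m^2/s) that freezes,
    from a balance of released latent heat vs. convective heat removal."""
    q_removed = h_conv * (t_surf_c - t_air_c)    # W/m^2 carried away by the air
    return q_removed / (m_imp * L_FUSION)

f = freezing_fraction(m_imp=2e-3, h_conv=500.0, t_surf_c=0.0, t_air_c=-15.0)
print("rime (dry) ice" if f >= 1.0 else f"glaze (wet) ice, freezing fraction {f:.2f}")
```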
- Permeability of lipid membranes. A molecular dynamics study
Computer Research and Modeling, 2009, v. 1, no. 4, pp. 423-436
A correct model of a lipid molecule (distearoylphosphatidylcholine, DSPC) and of a lipid membrane in water was constructed. The model lipid membrane is stable and has a reliable energy distribution among its degrees of freedom. After equilibration, the model system also has spatial parameters very similar to those of a real DSPC membrane in the liquid-crystalline phase. This model was used to study lipid membrane permeability to oxygen and water molecules and to the sodium ion. We obtained profiles of transmembrane mobility and diffusion coefficients, which we used to calculate effective permeability coefficients. We found that lipid membranes offer significant diffusional resistance to penetration not only by charged particles, such as ions, but also by nonpolar molecules, such as molecular oxygen. We propose a theoretical approach for calculating particle flux across a membrane, as well as methods for estimating distribution coefficients between the bilayer and the water phase.
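The abstract does not spell out the formula, but the standard way to combine partition and diffusion profiles into an effective permeability, which we assume is meant here, is the inhomogeneous solubility–diffusion relation:

```latex
\frac{1}{P_{\mathrm{eff}}} = \int_{-d/2}^{\,d/2} \frac{\mathrm{d}z}{K(z)\, D(z)}
```

where z is the coordinate along the membrane normal, K(z) is the local water/membrane distribution coefficient, D(z) the local diffusion coefficient, and d the bilayer thickness; high resistance arises wherever the product K(z)D(z) is small.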
- Efficient pseudorandom number generators for biomolecular simulations on graphics processors
Computer Research and Modeling, 2011, v. 3, no. 3, pp. 287-308
Langevin dynamics, Monte Carlo, and all-atom molecular dynamics simulations in implicit solvent require a reliable source of pseudorandom numbers generated at each step of the calculation. We present the two main approaches for implementing pseudorandom number generators (PRNGs) on a GPU. In the first approach, inherent in CPU-based calculations, one PRNG produces a stream of pseudorandom numbers in each thread of execution, whereas the second approach builds on the ability of different threads to communicate, thus sharing random seeds across the entire device. We exemplify these approaches through the development of the Ran2, Hybrid Taus, and Lagged Fibonacci algorithms. As an application-based test of randomness, we carry out Langevin dynamics simulations of N independent harmonic oscillators coupled to a stochastic thermostat; this model allows us to assess the statistical quality of the pseudorandom numbers. We also profile the performance of these generators in terms of computational time, memory usage, and the speedup factor (CPU/GPU time).
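As a concrete example of one of the algorithms named above, here is a CPU sketch of the Hybrid Taus generator (three Tausworthe steps XOR-combined with a linear congruential step) following its widely published formulation (e.g., GPU Gems 3, ch. 37); this is not necessarily the paper's exact GPU implementation.

```python
# Hybrid Taus: three Tausworthe generators XOR-combined with an LCG.
# Constants follow the widely published formulation (GPU Gems 3, ch. 37).
MASK32 = 0xFFFFFFFF

def taus_step(z, s1, s2, s3, m):
    b = (((z << s1) & MASK32) ^ z) >> s2
    return (((z & m) << s3) & MASK32) ^ b

def hybrid_taus(state):
    """Advance the four 32-bit state words in place; return a float in [0, 1)."""
    state[0] = taus_step(state[0], 13, 19, 12, 4294967294)
    state[1] = taus_step(state[1], 2, 25, 4, 4294967288)
    state[2] = taus_step(state[2], 3, 11, 17, 4294967280)
    state[3] = (1664525 * state[3] + 1013904223) & MASK32   # LCG component
    return (state[0] ^ state[1] ^ state[2] ^ state[3]) / 2.0**32

state = [129, 130, 131, 132]   # Tausworthe seeds must exceed small thresholds
samples = [hybrid_taus(state) for _ in range(5)]
```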
- Numerical simulation of combustion of a polydisperse suspension of coal dust in a spherical volume
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 531-539
A physical and mathematical model of the combustion of a polydisperse suspension of coal dust was developed. The formulation of the problem takes into account the evaporation of the particles' volatile components during heating, particle radiation, and gas heat transfer to the surroundings through the side surface of the spherical volume, with the heat transfer coefficient a function of temperature. The polydispersity of the coal dust is taken into consideration: the suspension consists of N fractions, and the fractions are subdivided into inert and reacting particles. The oxidizer mass balance equation accounts for the oxidizer consumption in each reaction (heterogeneous on the particle surface and homogeneous in the gas). Exothermic chemical reactions in the gas are described by the Arrhenius equation with second-order kinetics; the heterogeneous reaction on the particle surface is a first-order reaction. The equations were integrated numerically by the Runge–Kutta–Merson method (a sketch of the scheme is given below), and the reliability of the calculations was verified by solving partial problems. In the numerical experiments, the percentage composition of inert and reacting particles in the coal dust and their total mass were varied for each simulation. We have determined the influence of the percentage composition of inert and reacting particles on the burning characteristics of the polydisperse coal-dust methane-air mixture. The results showed that increasing the percentage of volatile components in the mixture leads to an increase of the total pressure in the volume, while the total pressure decreases as the share of inert components grows. It has been determined that there is an extremal radius of the coarse particles at which the maximum pressure reaches its highest value.
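For reference, one step of the Runge–Kutta–Merson scheme mentioned above (fourth order with an embedded error estimate) looks as follows; the right-hand side here is a placeholder, not the paper's combustion model.

```python
# One Runge-Kutta-Merson step: 4th-order update plus an embedded error
# estimate usable for step-size control. The ODE below is a placeholder.
import numpy as np

def rkm_step(f, t, y, h):
    """Return (y_next, max_error_estimate) for one Merson step of size h."""
    k1 = f(t, y)
    k2 = f(t + h / 3, y + h * k1 / 3)
    k3 = f(t + h / 3, y + h * (k1 + k2) / 6)
    k4 = f(t + h / 2, y + h * (k1 + 3 * k3) / 8)
    k5 = f(t + h, y + h * (k1 - 3 * k3 + 4 * k4) / 2)
    y_next = y + h * (k1 + 4 * k4 + k5) / 6
    err = h * (2 * k1 - 9 * k3 + 8 * k4 - k5) / 30
    return y_next, float(np.max(np.abs(err)))

# Placeholder test: dy/dt = -y, exact solution e^{-t}.
f = lambda t, y: -y
y, t, h = np.array([1.0]), 0.0, 0.1
while t < 1.0 - 1e-12:
    y, err = rkm_step(f, t, y, h)
    t += h
print(y[0], np.exp(-1.0))   # the two values should nearly coincide
```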