All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Comparison of mobile operating systems based on models of growth reliability of the software
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 325-334. Evaluating software reliability is an important part of developing modern software. Many studies aim to improve models for measuring and predicting the reliability of software products, but little attention is paid to approaches for comparing existing systems in terms of software reliability. Despite its great practical importance (including for managing software development), no complete and proven comparison methodology exists. In this article we propose a software reliability comparison methodology that makes extensive use of software reliability growth models (SRGMs). The methodology provides a certain level of flexibility and abstraction while remaining objective, i.e. it relies on measurable comparison criteria. Moreover, given the comparison methodology together with a set of SRGMs and evaluation criteria, it becomes much easier to disseminate information about the reliability of a wide range of software systems. The methodology was evaluated on the example of three open-source mobile operating systems: Sailfish, Tizen, and CyanogenMod.
A byproduct of our study is a comparison of the three analyzed open-source mobile operating systems. The goal of this research is to determine which OS is the strongest in terms of reliability. To this end we performed a GQM analysis and identified 3 questions and 8 metrics. Comparing the metrics, Sailfish is the best-performing OS in many cases; however, it is also the worst performer in many others. By contrast, Tizen scores best in 3 cases out of 8, but worst in only one case out of 8.
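The comparison rests on fitting software reliability growth models to defect data. As a hedged illustration (the article does not single out a specific SRGM here), the sketch below fits the classic Goel-Okumoto mean value function $m(t) = a(1 - e^{-bt})$ to synthetic cumulative failure counts by least squares; the estimated total defect count $a$ is one kind of measurable comparison criterion such a methodology could use:

```python
import math

def fit_goel_okumoto(times, cum_failures):
    """Least-squares fit of the Goel-Okumoto mean value function
    m(t) = a * (1 - exp(-b*t)) to cumulative failure counts.
    For each candidate b on a grid, the optimal a is closed-form."""
    best = None
    for i in range(1, 2001):
        b = i / 1000.0  # scan the detection rate b over (0, 2]
        f = [1.0 - math.exp(-b * t) for t in times]
        a = sum(fi * yi for fi, yi in zip(f, cum_failures)) / sum(fi * fi for fi in f)
        sse = sum((a * fi - yi) ** 2 for fi, yi in zip(f, cum_failures))
        if best is None or sse < best[2]:
            best = (a, b, sse)
    return best  # (a, b, sse)

# synthetic weekly cumulative defect counts for a hypothetical OS build
weeks = list(range(1, 11))
defects = [5, 9, 12, 14, 16, 17, 18, 18, 19, 19]
a_est, b_est, _ = fit_goel_okumoto(weeks, defects)
print(f"estimated total defects a = {a_est:.1f}, detection rate b = {b_est:.3f}")
```

Comparing `a_est` (projected total defects) and `b_est` (defect detection rate) across systems is one way such fitted models turn raw failure logs into comparable reliability numbers.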
-
The Solver of Boltzmann equation on unstructured spatial grids
Computer Research and Modeling, 2019, v. 11, no. 3, pp. 427-447. The purpose of this work is to develop a universal computer program (solver) that solves the kinetic Boltzmann equation for simulating rarefied gas flows in complexly shaped devices. The structure of the solver is described in detail, and its efficiency is demonstrated on the example of a modern multitube Knudsen pump. The kinetic Boltzmann equation is solved by a finite-difference method on discrete grids in physical and velocity space: the differential advection operator is approximated by finite differences, while the collision integral is computed by the conservative projection method.
In the developed program the unstructured spatial mesh is generated with GMSH and may include prisms, tetrahedra, hexahedra, and pyramids. The mesh is refined in flow regions with large gradients of the gas parameters. The three-dimensional velocity grid consists of cubic cells of equal volume.
The huge amount of computation requires effective parallelization of the algorithm, which is implemented in the program using the Message Passing Interface (MPI) technology. Data transfer from one node to another is implemented as a kind of boundary condition, so every MPI node stores information only about its own part of the grid.
The main result of the work is a graph of the pressure difference between two reservoirs connected by a multitube Knudsen pump as a function of the Knudsen number. This characteristic, obtained numerically, shows the quality of the pump. Distributions of pressure, temperature, and gas concentration in the steady state inside the pump and the reservoirs are presented as well.
The correctness of the solver is checked on two simpler boundary problems with known solutions: the temperature distribution between two plates held at different temperatures, and conservation of the total gas mass.
The correctness of the data obtained for the multitube Knudsen pump is verified on denser spatial and velocity grids and with more collisions per time step in the collision integral.
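As a minimal illustration of the finite-difference advection step in such a kinetic solver (a first-order sketch for one velocity node on a periodic 1D grid, not the article's actual scheme), the following shows how the upwind stencil direction follows the sign of the molecular velocity while exactly conserving mass:

```python
def upwind_step(f, v, dx, dt):
    """One explicit first-order upwind step for df/dt + v*df/dx = 0
    on a periodic 1D grid; the stencil follows sign(v), since each
    velocity node in a kinetic solver is advected at its own speed."""
    n = len(f)
    c = v * dt / dx  # Courant number; |c| <= 1 for stability
    if v >= 0:
        return [f[i] - c * (f[i] - f[(i - 1) % n]) for i in range(n)]
    return [f[i] - c * (f[(i + 1) % n] - f[i]) for i in range(n)]

# advect a box profile through one full period of the domain
n, v = 32, 1.0
dx = 1.0 / n
dt = 0.5 * dx / v            # Courant number 0.5
f = [1.0 if 4 <= i < 8 else 0.0 for i in range(n)]
mass0 = sum(f) * dx
for _ in range(2 * n):       # 64 steps of length v*dt = one period
    f = upwind_step(f, v, dx, dt)
print(sum(f) * dx, mass0)    # total mass is conserved exactly
```

On a periodic grid the upwind fluxes telescope, so the discrete mass is conserved to round-off, mirroring the total-mass conservation test used to verify the solver.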
-
A modified model of the effect of stress concentration near a broken fiber on the tensile strength of high-strength composites (MLLS-6)
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 559-573. The article proposes a model for assessing the potential strength of a composite material based on modern fibers that fail in a brittle manner.
Materials consisting of parallel cylindrical fibers quasi-statically stretched in one direction are simulated. The sample is assumed to contain at least 100 fibers, which corresponds to practically significant cases. The fibers exhibit a distribution of ultimate strain within the sample and therefore do not fail simultaneously; the distribution of their properties is usually described by the Weibull-Gnedenko statistical distribution. To simulate the strength of the composite, a model of fiber-break accumulation is used. It is assumed that the fibers, bound together by the polymer matrix, fragment down to twice the ineffective length, i.e. the distance over which the stress rises from zero at the broken end of a fiber to its mid-fiber value. However, this model greatly overestimates the strength of composites with brittle fibers, such as carbon and glass fibers, which fail in exactly this way.
Earlier attempts were made to account for the stress concentration near a broken fiber (the Hedgepeth model, the Ermolenko model, shear-lag analysis), but such models either required too much initial data or disagreed with experiment. In addition, they idealize the fiber packing in the composite as a regular hexagonal arrangement.
The proposed model combines the shear-lag approach to the stress distribution near a broken fiber with a statistical description of fiber strength based on the Weibull-Gnedenko distribution, while introducing a number of assumptions that simplify the calculation without loss of accuracy.
It is assumed that the stress concentration on an adjacent fiber increases its probability of failure in accordance with the Weibull distribution, and that the number of fibers with such an increased failure probability is directly related to the number of fibers already broken. All initial data can be obtained from simple experiments. It is shown that accounting for load redistribution only onto the nearest fibers already gives an accurate forecast.
This allowed a complete calculation of the strength of the composite. Our experimental data on carbon fibers, glass fibers, and model composites based on them (CFRP, GFRP) confirm a number of the model's conclusions.
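The statistical core of such fiber-break-accumulation models can be sketched with a Monte-Carlo bundle simulation. This is a deliberately simplified companion, not the article's model: it uses equal load sharing (a break spreads its load over all survivors rather than concentrating it on nearest neighbors), and the Weibull parameters are illustrative:

```python
import math
import random

def bundle_strength(n_fibers, shape_m, scale_s0, rng):
    """Strength of one bundle of brittle fibers with Weibull-distributed
    individual strengths under EQUAL load sharing (a simplification of
    the local stress-concentration rule discussed in the text)."""
    # Weibull sample via inverse CDF: s = s0 * (-ln U)^(1/m)
    s = sorted(scale_s0 * (-math.log(rng.random())) ** (1.0 / shape_m)
               for _ in range(n_fibers))
    # the bundle holds max over k of (k-th weakest strength) * (survivor fraction)
    return max(s[k] * (n_fibers - k) / n_fibers for k in range(n_fibers))

rng = random.Random(1)
m, s0 = 5.0, 3.5          # hypothetical Weibull shape and scale
samples = [bundle_strength(200, m, s0, rng) for _ in range(50)]
mean_strength = sum(samples) / len(samples)
print(f"mean bundle strength: {mean_strength:.2f} (single-fiber scale {s0})")
```

Even this crude version reproduces the qualitative point of the abstract: the bundle strength falls well below the single-fiber strength scale because breaks accumulate progressively rather than all at once.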
-
Finding equilibrium in two-stage traffic assignment model
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 365-379. The authors describe a two-stage traffic assignment model consisting of two blocks. The first block is a model for calculating the correspondence (demand) matrix; the second is a traffic assignment model. The first model computes the correspondence matrix from a matrix of transport costs (which characterizes the required volumes of movement from one area to another; here the cost is travel time). To solve this problem, the authors propose one of the most popular methods for calculating the correspondence matrix in urban studies, the entropy model. The second model describes exactly how the travel demand specified by the correspondence matrix is distributed along the possible paths. Knowing how the flows are distributed along the paths, one can recompute the cost matrix. Equilibrium in the two-stage model is a fixed point of the sequence of these two models. In practice the fixed point can be sought by the fixed-point iteration method; unfortunately, the convergence of this method and estimates of its convergence rate have not yet been studied thoroughly. In addition, the numerical implementation of the algorithm runs into many problems: in particular, with a poor starting point the algorithm may require computing extremely large numbers and exceed the available memory even on the most modern computers. The article therefore proposes a method that reduces the equilibrium problem to a problem of convex nonsmooth optimization, together with a numerical method for solving the resulting optimization problem. Numerical experiments were carried out for both approaches. The authors used data for Vladivostok (information from various sources was processed and collected into a new dataset for this city) and for two smaller cities in the USA.
Convergence could not be achieved with the fixed-point iteration method, whereas the second approach on the same dataset demonstrated a convergence rate of $k^{-1.67}$.
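The entropy model in the first block can be sketched as follows. This is a toy illustration: the balancing is the standard Sinkhorn-type iteration for gravity/entropy demand models, and the cost matrix, marginals, and temperature $\gamma$ below are made up for the example:

```python
import math

def entropy_matrix(costs, origins, dests, gamma=1.0, iters=200):
    """Entropy (gravity) model: d_ij = a_i * b_j * exp(-c_ij/gamma), with
    the balancing factors a, b found by Sinkhorn-type iterations so that
    row sums match `origins` and column sums match `dests`."""
    nr, nc = len(origins), len(dests)
    k = [[math.exp(-costs[i][j] / gamma) for j in range(nc)] for i in range(nr)]
    a, b = [1.0] * nr, [1.0] * nc
    for _ in range(iters):
        for i in range(nr):
            a[i] = origins[i] / sum(k[i][j] * b[j] for j in range(nc))
        for j in range(nc):
            b[j] = dests[j] / sum(k[i][j] * a[i] for i in range(nr))
    return [[a[i] * k[i][j] * b[j] for j in range(nc)] for i in range(nr)]

# two districts; travel within a district is cheaper than across
costs = [[1.0, 3.0], [3.0, 1.0]]
d = entropy_matrix(costs, origins=[10.0, 10.0], dests=[10.0, 10.0])
print([[round(x, 2) for x in row] for row in d])
```

In the two-stage loop this matrix would feed the assignment block, whose resulting travel times update `costs`, and the fixed point of that outer loop is the equilibrium discussed above.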
-
First-order optimization methods are workhorses in a wide range of modern applications in economics, physics, biology, machine learning, control, and other fields. Among first-order methods, accelerated and momentum methods receive special attention because of their practical efficiency. The heavy-ball method (HB) is one of the first momentum methods: it was proposed in 1964, and the first analysis was conducted for quadratic strongly convex functions. Since then a number of variations of HB have been proposed and analyzed. In particular, HB is known for its simplicity of implementation and its performance on nonconvex problems. However, like other momentum methods, it has nonmonotone behavior, and for optimal parameters the method suffers from the so-called peak effect. To address this issue, in this paper we consider an averaged version of the heavy-ball method (AHB). We show that for quadratic problems AHB has a smaller maximal deviation from the solution than HB. Moreover, for general convex and strongly convex functions, we prove non-accelerated rates of global convergence for AHB, its weighted version WAHB, and AHB with restarts (R-AHB). To the best of our knowledge, such guarantees for HB with averaging have not been explicitly proven for strongly convex problems in existing works. Finally, we conduct several numerical experiments on minimizing quadratic and nonquadratic functions to demonstrate the advantages of averaging for HB. We also test one more modification of AHB, the tail-averaged heavy-ball method (TAHB). In the experiments, we observed that HB with a properly adjusted averaging scheme converges faster than HB without averaging and has smaller oscillations.
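A minimal sketch of the idea on a one-dimensional quadratic, using uniform averaging of the iterates (the paper's exact weighting and restart schemes may differ; step sizes here are illustrative):

```python
def heavy_ball_averaged(grad, x0, alpha, beta, n_iters):
    """Heavy-ball: x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1}),
    returned together with the uniform running average of the iterates
    (the AHB output)."""
    x_prev, x, avg = x0, x0, x0
    for k in range(1, n_iters + 1):
        x_prev, x = x, x - alpha * grad(x) + beta * (x - x_prev)
        avg += (x - avg) / (k + 1)   # running mean of x_0 .. x_k
    return x, avg

# f(x) = 0.5 * L * x^2, minimizer x* = 0
L = 10.0
x_last, x_avg = heavy_ball_averaged(lambda x: L * x, x0=1.0,
                                    alpha=0.09, beta=0.5, n_iters=300)
print(abs(x_last), abs(x_avg))
```

The raw iterates oscillate (the momentum roots are complex here), while the averaged sequence damps those oscillations, which is the qualitative effect the paper exploits.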
-
Numerical study of high-speed mixing layers based on a two-fluid turbulence model
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1125-1142. This work is devoted to the numerical study of high-speed mixing layers of compressible flows. The problem has a wide range of practical applications and, despite its apparent simplicity, is quite complex to model, because in the mixing layer the flow passes from the laminar to the turbulent regime as a result of the instability of the tangential velocity discontinuity. The numerical results therefore depend strongly on the adequacy of the turbulence models used. In the present work this problem is studied within the two-fluid approach to turbulence. This approach arose relatively recently and is developing quite rapidly. Its main advantage is that it leads to a closed system of equations, whereas, as is well known, the long-standing Reynolds approach leads to an open system. The paper presents the essence of the two-fluid approach to modeling a turbulent compressible medium and the methodology for the numerical implementation of the proposed model. To obtain a stationary solution, the relaxation method and Prandtl boundary layer theory were applied, yielding a simplified system of equations. Since the problem involves mixing of high-speed flows, heat transfer must also be modeled, and the pressure cannot be assumed constant as is done for incompressible flows. In the numerical implementation, the convective terms of the hydrodynamic equations were approximated by a second-order upwind scheme treated explicitly, and the diffusion terms on the right-hand sides were approximated by central differences treated implicitly. The resulting equations were solved by the sweep method, and the SIMPLE method was used to correct the velocity through the pressure.
The paper investigates the two-fluid turbulence model at different inlet turbulence intensities. The numerical results show good agreement with known experimental data at an inlet turbulence intensity of $0.1 < I < 1 \%$. Data from known experiments, as well as results of the $k − kL + J$ and LES models, are presented to demonstrate the effectiveness of the proposed turbulence model. It is demonstrated that the two-fluid model is as accurate as known modern models and more efficient in terms of computational resources.
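The sweep method used for the implicit diffusion terms is the standard tridiagonal (Thomas) algorithm. A generic sketch for systems $a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i$ (not tied to the article's specific equations):

```python
def sweep_solve(a, b, c, d):
    """Tridiagonal sweep (Thomas) algorithm for
    a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i], i = 0..n-1,
    with a[0] = c[n-1] = 0. Forward elimination, then back substitution."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# diffusion-type model system: -x[i-1] + 2*x[i] - x[i+1] = 1
n = 5
x = sweep_solve([0.0] + [-1.0] * (n - 1), [2.0] * n,
                [-1.0] * (n - 1) + [0.0], [1.0] * n)
print([round(v, 2) for v in x])
```

Each implicit line solve in the boundary-layer march reduces to exactly this O(n) elimination, which is why the sweep method is the workhorse for such discretizations.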
-
Mathematical model and heuristic methods for organizing distributed computations in Internet of Things systems
Computer Research and Modeling, 2025, v. 17, no. 5, pp. 851-870. The theory of distributed computing, in which computational tasks are solved collectively by resource-constrained devices, has developed significantly in recent years. In practice this scenario arises when processing data in Internet of Things (IoT) systems: data is processed on edge computing devices in order to reduce system latency and the load on the network infrastructure. However, the rapid growth and widespread adoption of IoT systems raise the need for methods that reduce the resource intensity of computations. The resource constraints of the devices pose two issues for the distribution of computational resources: first, the need to account for the transit cost of data between devices solving different tasks; second, the need to account for the resource cost of the distribution process itself, which is particularly relevant for groups of autonomous devices such as drones or robots. An analysis of openly available modern publications showed that no proposed model or method of distributing computational resources takes all of these factors into account simultaneously, which makes it relevant to create a new mathematical model of organizing distributed computing in IoT systems together with methods for solving it. This article proposes such a model along with heuristic optimization methods, providing an integrated approach to implementing distributed computing in IoT systems. A scenario is considered in which a leader device within a group makes decisions about allocating computational resources, including its own, for distributed task solving with information exchange.
It is also assumed that there is no prior knowledge of which device will assume the role of leader or of the migration paths of computational tasks across devices. Experiments showed the effectiveness of the proposed models and heuristics: up to a 52% reduction in the resource cost of solving computational problems when data transit costs are taken into account; savings of up to 73% of resources from supplementary criteria that optimize task distribution by minimizing fragment migrations and distances; and a reduction of up to 28 times in the resource cost of solving the resource distribution problem itself, at the cost of a loss of distribution quality of up to 10%.
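The article's heuristics are not reproduced here, but the flavor of the trade-off they optimize (compute cost versus transit cost of moving a task fragment between devices) can be shown with a deliberately simple greedy toy; every name, cost value, and the greedy rule itself below are assumptions for illustration only:

```python
def greedy_assign(task_costs, transit, capacity):
    """Toy greedy heuristic: assign task fragments in order, each to the
    device minimizing (compute cost + transit cost from the previous
    fragment's device), subject to per-device capacity. Assumes some
    device always has spare capacity. Illustrative only, not the
    article's algorithm."""
    load = [0.0] * len(capacity)
    assignment, prev_dev, total = [], None, 0.0
    for frag in task_costs:
        best_dev, best_cost = None, None
        for dev, compute in enumerate(frag):
            if load[dev] + compute > capacity[dev]:
                continue
            hop = 0.0 if prev_dev in (None, dev) else transit[prev_dev][dev]
            if best_cost is None or compute + hop < best_cost:
                best_dev, best_cost = dev, compute + hop
        assignment.append(best_dev)
        load[best_dev] += frag[best_dev]
        total += best_cost
        prev_dev = best_dev
    return assignment, total

# 3 fragments, 2 devices: transit cost keeps consecutive fragments together
task_costs = [[1.0, 0.9], [1.0, 0.9], [1.0, 0.9]]
transit = [[0.0, 0.5], [0.5, 0.0]]
assignment, total = greedy_assign(task_costs, transit, capacity=[10.0, 10.0])
print(assignment, total)
```

Penalizing hops makes the heuristic keep consecutive fragments on one device, which is the same qualitative effect as the article's criterion of minimizing fragment migrations.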
-
Computational investigation of aerodynamic performance of the generic flying-wing aircraft model using FlowVision computational code
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 67-74. A modern approach to modernizing experimental techniques involves designing mathematical models of the wind tunnel, also referred to as Electronic or Digital Wind Tunnels, which are meant to supplement experimental data with computational analysis. An Electronic Wind Tunnel is expected to provide accurate information on the aerodynamic performance of an aircraft based on a set of experimental data, to reconcile data from different test facilities, and to enable comparison between computational results for flight conditions and data obtained in the presence of the support system and the test section.
Completing this task requires preliminary research involving extensive wind-tunnel testing as well as RANS-based computations using supercomputer technologies. At different stages of the computational investigation one may have to model not only the aircraft itself but also the wind-tunnel test section and the model support system. Modeling such complex geometries inevitably produces quite complex vortical and separated flows that must be simulated. Another difficulty is that boundary layer transition is often present in wind-tunnel testing because of the small model scales and therefore low Reynolds numbers.
The current article covers the first stage of the Electronic Wind-Tunnel program: a computational investigation of the aerodynamic characteristics of a generic flying-wing UAV model previously tested in the TsAGI T-102 wind tunnel. Since this stage is preliminary, the model was simulated without taking the test-section and support-system geometry into account, and the boundary layer was considered fully turbulent.
The FlowVision computational code was used for this research because of its automatic grid generation and the stability of its solver when simulating complex flows. A two-equation k–ε turbulence model was used with special wall functions designed to properly capture flow separation. The computed lift and drag coefficients for different angles of attack were compared with the experimental data.
-
On the software implementation of the potential-flow method for describing physical and chemical processes
Computer Research and Modeling, 2018, v. 10, no. 6, pp. 817-832. Within modern non-equilibrium thermodynamics (the macroscopic approach to describing and mathematically modeling the dynamics of real physical and chemical processes), the authors have developed a potential-flow method applicable in the general case to real macroscopic physicochemical systems. In this method, the description and mathematical modeling of the processes consist in determining, from the interaction potentials, the thermodynamic forces that drive the processes, and the kinetic matrix, determined by the kinetic properties of the system, which together govern the dynamics of the physicochemical processes under the action of those thermodynamic forces. Knowing the thermodynamic forces and the kinetic matrix of the system, one determines the rates of the physicochemical processes, and from these, via the conservation laws, the rates of change of the system's state coordinates. In this way a closed system of equations of the physicochemical processes is obtained. Given the interaction potentials in the system, the kinetic matrices of its simple subsystems (individual processes that are conjugate to each other and not conjugate to other processes), the coefficients entering the conservation laws, the initial state of the system, and the external flows into it, one can obtain the complete dynamics of the physicochemical processes. However, for a complex physicochemical system in which a large number of processes take place, the dimension of the resulting system of equations becomes correspondingly large.
Hence the problem arises of automating the construction of the described system of dynamic equations for a user-defined physicochemical system. In this article we develop a library of software data types that represent such a system at the level of its calculation scheme (state coordinates, energy degrees of freedom, physicochemical processes, flows, external flows, and the relationships between these components), together with algorithms operating on these data types and routines for computing the system parameters. The library includes both a program type for the calculation scheme of the user-defined physicochemical system and program types for the components of this scheme (state coordinates, energy degrees of freedom, physicochemical processes, flows, external flows). The relationships between components are expressed by reference (index) addressing, which significantly speeds up the computation of system characteristics because data access is faster.
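The reference (index) addressing between components can be sketched as follows; this is a minimal illustration of the idea with made-up type and field names, not the article's actual library:

```python
from dataclasses import dataclass, field

@dataclass
class StateCoordinate:
    name: str

@dataclass
class Process:
    name: str
    coord_ids: list  # indices into the system's coordinate list
                     # (the reference/index addressing described above)

@dataclass
class PhysChemSystem:
    coords: list = field(default_factory=list)
    processes: list = field(default_factory=list)

    def add_coord(self, name):
        self.coords.append(StateCoordinate(name))
        return len(self.coords) - 1   # the index serves as the reference

    def add_process(self, name, coord_ids):
        self.processes.append(Process(name, coord_ids))

    def coords_of(self, proc_id):
        # index lookup instead of searching by name
        return [self.coords[i].name for i in self.processes[proc_id].coord_ids]

system = PhysChemSystem()
i_a = system.add_coord("n_A")   # amount of hypothetical substance A
i_b = system.add_coord("n_B")
system.add_process("A_to_B", [i_a, i_b])
print(system.coords_of(0))
```

Resolving a component through a stored integer index is a constant-time list lookup, which is the speed advantage over name-based or pointer-free searching that the abstract refers to.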
-
Hypergraph approach in the decomposition of complex technical systems
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1007-1022. The article considers a mathematical model of the decomposition of a complex product into assembly units. This is an important engineering problem that affects the organization of discrete production and its operational management. A review of modern approaches to the mathematical modeling and computer-aided design of decompositions is given. In these approaches graphs, networks, matrices, etc. serve as mathematical models of the structures of technical systems; they describe the mechanical structure as a binary relation on the set of system elements. The geometrical coordination and integrity of machines and mechanical devices during manufacturing is achieved by means of basing. In general, basing can be performed on several elements simultaneously, so it represents a relation of variable arity that cannot be correctly described in terms of binary mathematical structures. A new hypergraph model of the mechanical structure of a technical system is described, which allows an adequate formalization of assembly operations and processes. Assembly operations that are carried out by two working bodies and consist in realizing mechanical connections are considered; such operations, called coherent and sequential, are the prevailing type in modern industrial practice. It is shown that the mathematical description of such an operation is a normal contraction of an edge of the hypergraph, and a sequence of contractions transforming the hypergraph into a point is a mathematical model of the assembly process. Two important theorems on the properties of contractible hypergraphs and their subgraphs, proved by the author, are presented. The concept of $s$-hypergraphs is introduced; $s$-hypergraphs are correct mathematical models of the mechanical structures of any assemblable technical systems.
The decomposition of a product into assembly units is defined as the cutting of an $s$-hypergraph into $s$-subgraphs. The cutting problem is stated in terms of discrete mathematical programming. Mathematical models of the structural, topological, and technological constraints are obtained, and objective functions formalizing the optimal choice of design solutions in various situations are proposed. The developed mathematical model of product decomposition is flexible and open: it allows extensions that take into account the characteristics of the product and its production.
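A set-based sketch of edge contraction driving a hypergraph to a single vertex; the article's normal contraction carries additional structure (basing, working bodies), so this shows only the combinatorial core:

```python
def contract_edge(vertices, edges, edge):
    """Contract one hyperedge: merge all its vertices into a single
    representative vertex and rewrite the remaining edges accordingly
    (a simplified, set-based view of the contraction operation)."""
    merged = frozenset(edge)
    new_v = min(edge)  # representative label for the merged vertex
    new_vertices = (vertices - merged) | {new_v}
    new_edges = []
    for e in edges:
        if frozenset(e) == merged:
            continue  # the contracted edge disappears
        ne = frozenset(v if v not in merged else new_v for v in e)
        if len(ne) > 1 and ne not in new_edges:
            new_edges.append(ne)
    return new_vertices, new_edges

# a 3-part product: contracting the edges one by one shrinks the
# structure to a single vertex, modeling the completed assembly
V = {1, 2, 3}
E = [frozenset({1, 2}), frozenset({2, 3})]
V, E = contract_edge(V, E, {1, 2})   # parts 1 and 2 are joined
V, E = contract_edge(V, E, {1, 3})   # the subassembly is joined to part 3
print(V, E)
```

The sequence of contractions terminating in one vertex with no edges mirrors the abstract's model of an assembly process as a chain of edge contractions ending in a point.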
Indexed in Scopus
The journal is included in the Russian Science Citation Index