All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Numerical Simulation, Parallel Algorithms and Software for Performance Forecast of the System “Fractured-Porous Reservoir – Producing Well” During its Commissioning Into Operation
Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1069-1075
The mathematical model, finite-difference schemes and algorithms for computing the transient thermo- and hydrodynamic processes involved in commissioning the unified system comprising the oil-producing well, the electrical submersible pump and the fractured-porous reservoir with bottom water are developed. These models are implemented in a computer package that simulates the transient processes and visualizes the results alongside the computations. An important feature of the Oil-RWP package is its interaction with the special external program GCS, which simulates the operation of the surface electric control station, and the data exchange between the two programs. Oil-RWP sends telemetry data and the current parameters of the operating submersible unit to the GCS module (direct coupling). The station controller analyzes the incoming data and generates the required control parameters for the submersible pump, which are sent back to Oil-RWP (feedback). This approach allows us to regard the developed software as an “Intellectual Well System”.
Some principal simulation results can be summarized as follows. The transient time between shut-in and quasi-steady operation of the producing well depends on the watercut of the well stream, the filtration and capacitive parameters of the oil reservoir, the physical-chemical properties of the phases, and the technical characteristics of the submersible unit. At large times, the solution of the nonstationary governing equations is practically identical to the solution of the inverse quasi-stationary problem with the same initial data. The developed software package is an effective tool for the analysis, forecast and optimization of the operating parameters of the unified oil-producing complex during its commissioning into the operating regime.
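The direct-coupling/feedback loop between the well simulator and the control station described above can be sketched as a simple co-simulation cycle. All names (WellSimulator, ControlStation), the relaxation dynamics and the proportional control law below are illustrative assumptions, not the actual Oil-RWP/GCS API.

```python
import math

class WellSimulator:
    """Toy stand-in for the well/pump/reservoir model: the flow rate
    relaxes toward a steady-state value set by the pump frequency."""
    def __init__(self, rate=0.0):
        self.rate = rate                       # current flow rate, m^3/day

    def step(self, pump_freq, dt=1.0):
        target = 2.0 * pump_freq               # assumed steady-state rate
        self.rate += (target - self.rate) * (1 - math.exp(-dt))
        return {"rate": self.rate, "freq": pump_freq}   # telemetry packet

class ControlStation:
    """Toy stand-in for the GCS module: proportional control that
    drives the well toward a flow-rate setpoint."""
    def __init__(self, setpoint, freq=25.0):
        self.setpoint, self.freq = setpoint, freq

    def update(self, telemetry):
        error = self.setpoint - telemetry["rate"]
        self.freq += 0.1 * error               # feedback: adjust pump frequency
        return self.freq

well, station = WellSimulator(), ControlStation(setpoint=100.0)
freq = station.freq
for _ in range(200):                           # transient toward quasi-steady mode
    telemetry = well.step(freq)                # direct coupling: simulator -> station
    freq = station.update(telemetry)           # feedback: station -> simulator

print(round(well.rate, 1))                     # → 100.0
```

The loop settles at the setpoint, mimicking the transition from shut-in to quasi-steady operation discussed in the abstract.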
-
On the relations of stochastic convex optimization problems with empirical risk minimization problems on $p$-norm balls
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319
In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e.g., risk minimization) and mathematical statistics (e.g., maximum likelihood estimation). There are two main approaches to solving such problems: the Stochastic Approximation approach (online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline approach). In the offline approach, the problem is replaced by its empirical counterpart (the empirical risk minimization problem). The natural question is how to define the sample size, i.e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is a solution of the original problem with the desired precision. This is one of the main issues in modern machine learning and optimization. In the last decade, significant advances were made in solving convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of arbitrary balls in $p$-norms. We also explore how the parameter $p$ affects the estimates of the required number of terms in the empirical risk.
In this paper, both convex and saddle-point optimization problems are considered. For strongly convex problems, the existing results on equal sample sizes in the two approaches (online and offline) were generalized to arbitrary norms. Moreover, it was shown that the strong convexity condition can be weakened: the obtained results remain valid for functions satisfying the quadratic growth condition. When this condition is not met, it is proposed to regularize the original problem in an arbitrary norm. In contrast to convex problems, saddle-point problems are much less studied. For saddle-point problems, the sample size was obtained under the condition of $\gamma$-growth of the objective function. When $\gamma = 1$, this condition becomes the sharp-minimum condition of convex problems. It was shown that in the case of a sharp minimum the sample size is almost independent of the desired accuracy of the solution of the original problem.
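The offline (Sample Average Approximation) approach described above can be illustrated on the simplest stochastic problem: for $\min_x \mathbb{E}[(x-\xi)^2]$ the exact solution is $x^* = \mathbb{E}[\xi]$, while the empirical counterpart is minimized by the sample mean. The sketch below (all numbers are illustrative, not from the paper) checks that the empirical solution approaches $x^*$ as the sample size grows.

```python
import random

random.seed(0)
TRUE_MEAN = 3.0          # x* = E[xi] for the toy problem min_x E[(x - xi)^2]

def sample(n):
    """Draw n realizations of the random variable xi."""
    return [random.gauss(TRUE_MEAN, 1.0) for _ in range(n)]

def empirical_minimizer(xs):
    # argmin of the empirical risk (1/N) * sum_i (x - xi_i)^2 is the sample mean
    return sum(xs) / len(xs)

errors = []
for n in (10, 1000, 100000):
    x_n = empirical_minimizer(sample(n))
    errors.append(abs(x_n - TRUE_MEAN))

print(errors)            # errors shrink roughly like 1 / sqrt(N)
```

The required sample size for a given precision is exactly the quantity whose dependence on the norm (the parameter $p$) the paper studies.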
-
Transport modeling: averaging price matrices
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 317-327
This paper considers various approaches to averaging the generalized travel costs calculated for different modes of travel in a transportation network. A mode of travel is understood to mean both a mode of transport, for example, a car or public transport, and movement without the use of transport, for example, on foot. The task of calculating trip matrices includes calculating the total matrices, in other words, estimating the total demand for movements by all modes, as well as splitting the matrices by mode, also called modal split. To calculate trip matrices, gravity, entropy and other models are used, in which the probability of movement between zones is estimated from a certain measure of the distance between these zones. Usually, the generalized cost of moving along the optimal path between zones is used as the distance measure. However, the generalized cost of movement differs across modes, so when calculating the total trip matrices it becomes necessary to average the generalized costs over the modes of movement. The averaging procedure is subject to the natural requirement of monotonicity in all arguments, which is not met by some commonly used averaging methods, for example, averaging with weights. The modal-split problem is solved by the methods of discrete choice theory, within which correct methods have been developed for averaging the utilities of alternatives that are monotonic in all arguments. The authors propose an adaptation of discrete choice theory methods for calculating the average cost of movements in the gravity and entropy models.
Transferring the averaging formulas from the context of the modal-split model to the trip matrix calculation model requires introducing new parameters and deriving conditions on their admissible values, which is done in this article. The article also considers recalibration of the gravity function, which is necessary when switching to a new averaging method if the existing function was calibrated with the weighted average cost. The proposed methods are demonstrated on a small fragment of a transport network, and the presented calculation results show their advantage.
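The monotonicity issue above can be made concrete with a two-mode sketch (the formulas below are a standard discrete-choice construction, assumed for illustration, not the paper's exact formulas). A share-weighted average with logit weights is not monotone in the costs, whereas the log-sum-exp ("logsum") average from discrete choice theory is monotone in every argument.

```python
import math

def logit_weighted_average(costs, theta=1.0):
    """Average cost weighted by logit mode shares w_i ∝ exp(-theta * c_i)."""
    w = [math.exp(-theta * c) for c in costs]
    s = sum(w)
    return sum(wi / s * ci for wi, ci in zip(w, costs))

def logsum_average(costs, theta=1.0):
    """Logsum average: -1/theta * ln sum_i exp(-theta * c_i),
    strictly increasing in every cost."""
    return -math.log(sum(math.exp(-theta * c) for c in costs)) / theta

base = [0.0, 5.0]
worse = [0.0, 6.0]     # the second mode became strictly more expensive

# Monotonicity holds for the logsum average...
assert logsum_average(worse) > logsum_average(base)
# ...but fails for the share-weighted average: the "average cost" *drops*
# although one mode got worse, because that mode's share shrinks faster
# than its cost grows.
assert logit_weighted_average(worse) < logit_weighted_average(base)
print("monotonicity check passed")
```

This is precisely why the paper replaces weighted averaging with discrete-choice-style averaging in the gravity and entropy models.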
-
Choosing optimal cell parameters of a transcatheter aortic valve prosthesis
Computer Research and Modeling, 2014, v. 6, no. 6, pp. 943-954
This paper presents a finite-element analysis of the dependences between the basic geometric parameters of the frame cell and its function. Simplified models of the frame cell with varied strut width, strut thickness and number of cells per circumference were studied to evaluate radial forces, maximum stress and strain, permanent residual strain and pinching load forces. The outcomes of this study may help in the development of new artificial heart valves and in the analysis of TAVI prostheses already in clinical use.
-
Analysis of players’ behaviour in a modified “Sea battle” game
Computer Research and Modeling, 2016, v. 8, no. 5, pp. 817-827
The well-known “Sea battle” game is the focus of the present work. The main goal of the article is to propose a modified version of the “Sea battle” game and to find optimal players’ strategies under the new rules. Changes were applied to the attacking strategies (a new option to attack four cells in one shot was added), to the size of the field (sizes of 10 × 10, 20 × 20 and 30 × 30 were used), and to the rules of ship disposal during the game (a new possibility to move a ship out of the attacked zone). The game was solved using game theory: payoff matrices were found for each version of the altered rules, and optimal pure and mixed strategies were discovered for them. The payoff matrices were solved by an iterative method. The simulation consisted in playing five attacking algorithms against six disposal algorithms with parameter variation: the attacking algorithms were varied over 100 parameter sets, the disposal algorithms over 150 sets. The major result is that with such algorithms the modified “Sea battle” game can be solved, which implies the possibility of finding stable pure and mixed strategies of behaviour that guarantee the sides optimal results in game-theoretic terms. Moreover, the influence of modifying the rules of the “Sea battle” game is estimated. A comparison with the authors’ prior results on this topic was made: by matching the payoff matrices with the statistical analysis completed earlier, it was found that the standard “Sea battle” game can be represented as a special case of the game modifications considered in this article. The work is relevant not only for military applications but for civilian areas as well: its results could save resources in exploration, provide an advantage in armed conflicts, and help protect devices from devastating impact.
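The iterative solution of payoff matrices mentioned above can be sketched with the classic Brown–Robinson (fictitious play) scheme; this is an assumed illustration, since the abstract does not specify which iterative method the authors use. For matching pennies the mixed equilibrium is (1/2, 1/2) with game value 0, and the empirical strategy frequencies recover it.

```python
def brown_robinson(A, iters=20000):
    """Brown-Robinson iterations for a zero-sum matrix game A
    (row player maximizes). Returns empirical mixed strategies."""
    m, n = len(A), len(A[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_payoff = [0.0] * m   # cumulative payoff of each row vs column history
    col_payoff = [0.0] * n   # cumulative payoff of each column vs row history
    i = 0                    # row player's current pure strategy
    for _ in range(iters):
        row_counts[i] += 1
        for k in range(n):
            col_payoff[k] += A[i][k]
        j = min(range(n), key=lambda k: col_payoff[k])  # column best response
        col_counts[j] += 1
        for k in range(m):
            row_payoff[k] += A[k][j]
        i = max(range(m), key=lambda k: row_payoff[k])  # row best response
    x = [c / iters for c in row_counts]
    y = [c / iters for c in col_counts]
    return x, y

A = [[1, -1], [-1, 1]]       # matching pennies
x, y = brown_robinson(A)
print([round(p, 2) for p in x], [round(p, 2) for p in y])
```

The empirical frequencies converge to the optimal mixed strategies, which is the property the paper relies on when solving its 100 × 150 payoff matrices.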
-
Numerical simulation of ethylene combustion in supersonic air flow
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 75-86
In the present paper, we discuss the possibility of a simplified three-dimensional unsteady simulation of plasma-assisted combustion of gaseous fuel in a supersonic airflow. The simulation was performed using the FlowVision CFD software. Analysis of the experimental geometry shows that it has an essentially 3D nature, conditioned by the discrete fuel injection into the flow as well as by the presence of localized plasma filaments. The study proposes a simplification of the modeling geometry based on the symmetry of the aerodynamic duct and the periodicity of the spatial inhomogeneities. The modified FlowVision $k–\varepsilon$ turbulence model named “KEFV” was tested for supersonic flow conditions. Based on that, a detailed grid without wall functions was used in the heat-release and fuel-injection region, while surfaces remote from the key area were modeled using wall functions, which allowed us to significantly reduce the number of cells in the computational grid. Two steps significantly simplified the complex problem of hydrocarbon fuel ignition by plasma generation: first, the plasma formations were modeled by volumetric heat sources, and second, the fuel combustion was reduced to one global reaction. Calibration and parametric optimization of the fuel injection into the supersonic flow for the IADT-50 JIHT RAS wind tunnel was performed by simulation in FlowVision. The study demonstrates rather good agreement between the experimental schlieren photo of the flow with fuel injection and the synthetic one. Modeling the flow with fuel injection and plasma generation for the combustion chamber geometry of the T131 TSAGI facility demonstrates a combustion mode for the set of experimental parameters.
The study emphasizes the importance of computational mesh adaptation and of increasing the spatial resolution for the volumetric heat sources that model the electric discharge area. Reasonable qualitative agreement between the experimental and simulated pressure distributions confirms the possibility of limited application of such simplified modeling to combustion in high-speed flows.
-
Simulation of interprocessor interactions for MPI-applications in the cloud infrastructure
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 955-963
A new cloud center of parallel computing is to be created in the Laboratory of Information Technologies (LIT) of the Joint Institute for Nuclear Research (JINR), which is expected to improve significantly the efficiency of numerical calculations and to expedite obtaining new physically meaningful results through more rational use of computing resources. To optimize a scheme of parallel computations in a cloud environment, it is necessary to test the scheme for various combinations of equipment parameters (processor speed and number, throughput of the communication network, etc.). As a test problem, the parallel MPI algorithm for calculations of long Josephson junctions (LDJ) is chosen. The problem of evaluating the impact of the above factors on the computing speed of the test problem is solved by simulation with the program SyMSim developed in LIT.
Simulating the LDJ calculations in the cloud environment enables users to find, without running a series of tests, the optimal number of CPUs for a given network type before launching the calculations in a real computing environment. This can save significant computing time and resources. The main parameters of the model were obtained from a computational experiment conducted on a special cloud-based testbed. The experiments showed that the pure computation time decreases in inverse proportion to the number of processors but depends significantly on network bandwidth. Comparison of the empirical results with the simulation results showed that the simulation model correctly reproduces parallel calculations performed using MPI technology. It also confirms our recommendation: for fast calculations of this type, one needs to increase both the number of CPUs and the network throughput at the same time. The simulation results also make it possible to derive an empirical analytical formula expressing the dependence of the calculation time on the number of processors for a fixed system configuration. The obtained formula can be applied to other similar studies but requires additional tests to determine the values of its parameters.
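A simple functional form consistent with the observations above is $T(p) = a + b/p$, where $b/p$ is the pure computation time (inversely proportional to the number of processors $p$) and $a$ lumps communication overhead, which depends on network bandwidth. The sketch below fits this two-parameter model by least squares; the concrete numbers are synthetic, for illustration only, not measurements from the paper.

```python
def fit_inverse_model(ps, ts):
    """Least-squares fit of T(p) = a + b / p over points (p, T):
    linear regression of T on x = 1/p."""
    xs = [1.0 / p for p in ps]
    n = len(ps)
    mean_x = sum(xs) / n
    mean_t = sum(ts) / n
    b = sum((x - mean_x) * (t - mean_t) for x, t in zip(xs, ts)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_t - b * mean_x
    return a, b

# Synthetic timings generated from a = 2 s overhead, b = 120 s serial work.
ps = [1, 2, 4, 8, 16]
ts = [2 + 120 / p for p in ps]
a, b = fit_inverse_model(ps, ts)
print(round(a, 6), round(b, 6))   # → 2.0 120.0
```

On real testbed data the fitted $a$ would vary with network throughput, which is exactly the dependence the paper's simulation experiments explore.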
-
Solution of the problem of optimal control of the process of methanogenesis based on the Pontryagin maximum principle
Computer Research and Modeling, 2020, v. 12, no. 2, pp. 357-367
The paper presents a mathematical model that describes the production of biogas from livestock waste. The model covers the processes occurring in a biogas plant for mesophilic and thermophilic media, as well as for continuous and periodic modes of substrate inflow. The coefficients of the model, found earlier for the periodic mode by solving the model identification problem from experimental data using a genetic algorithm, are given.
For the methanogenesis model, an optimal control problem is formulated in the form of a Lagrange problem whose objective functional is the biogas output over a certain period of time. The control parameter of the problem is the rate of substrate inflow into the biogas plant. An algorithm for solving this problem is proposed, based on a numerical implementation of the Pontryagin maximum principle. A hybrid genetic algorithm with an additional search in the vicinity of the best solution by the conjugate gradient method was used as the optimization method. This numerical method for solving optimal control problems is universal and applicable to a wide class of mathematical models.
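The hybrid scheme described above (global genetic search plus local refinement near the best individual) can be sketched as follows. Illustrative assumptions throughout: the objective is a toy function rather than the Pontryagin-derived functional, and for brevity the local stage is plain gradient descent rather than conjugate gradients.

```python
import random

random.seed(1)

def f(x):                       # toy objective with minimum at x = (1, -2)
    return (x[0] - 1) ** 2 + (x[1] + 2) ** 2

def grad(x):
    return [2 * (x[0] - 1), 2 * (x[1] + 2)]

def genetic_stage(pop_size=40, gens=60):
    """Global search: elitist GA with blend crossover and Gaussian mutation."""
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            children.append([(ai + bi) / 2 + random.gauss(0, 0.1)
                             for ai, bi in zip(a, b)])
        pop = elite + children
    return min(pop, key=f)

def local_refinement(x, step=0.1, iters=200):
    """Local stage: polish the best GA individual (gradient descent here,
    conjugate gradients in the paper)."""
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

best = local_refinement(genetic_stage())
print([round(v, 3) for v in best])   # → [1.0, -2.0]
```

The division of labor is the point: the GA avoids getting stuck far from the optimum, while the local method delivers the final accuracy cheaply.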
In the course of the study, various modes of substrate feed into the digesters, temperature environments and types of raw materials were analyzed. It is shown that the rate of biogas production in the continuous feed mode is 1.4–1.9 times higher in the mesophilic medium (1.9–3.2 times in the thermophilic medium) than in the periodic mode over the period of complete fermentation, which is associated with a higher substrate feed rate and a greater concentration of nutrients in the substrate. However, the biogas yield over the period of complete fermentation in the periodic mode is twice the yield over the period of a complete change of the substrate in the methane tank in the continuous mode, which indicates incomplete processing of the substrate in the latter case. The rate of biogas formation for the thermophilic medium in continuous mode at the optimal feed rate of raw materials is three times higher than for the mesophilic medium. Comparison of the biogas output for various types of raw materials shows that the highest output is observed for poultry farm waste and the lowest for cattle farm waste, which is associated with the nutrient content per unit of substrate of each type.
-
System modeling, risks evaluation and optimization of a distributed computer system
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1349-1359
The article deals with the operational reliability of a distributed system whose core is an open integration platform providing the interaction of various software packages for modeling gas transportation. Some of them are accessed through thin clients under the cloud technology “software as a service”. Mathematical models of operation, transmission and computing ensure the functioning of an automated dispatching system for oil and gas transportation. The paper presents a system solution based on the theory of Markov random processes and considers the stage of stable operation. The stationary operation mode of the Markov chain with continuous time and discrete states is described by a system of Chapman–Kolmogorov equations with respect to the average numbers (mathematical expectations) of objects in certain states. The objects of research are both system elements present in large numbers (thin clients and computing modules) and individual ones (a server and a network manager, i.e. a message broker). Together they form interacting Markov random processes: the transition probabilities in one group of elements depend on the average numbers of elements in the other groups.
The authors propose a multi-criteria dispersion model of risk assessment for such systems (both in the broad and the narrow sense, in accordance with the IEC standard). The risk is the standard deviation of an estimated object parameter from its average value. The dispersion risk model makes it possible to define optimality criteria and the risks of whole-system functioning. In particular, for a thin client, the following are calculated: the lost-profit risk, the total risk of losses due to non-productive element states, and the total risk of losses over all system states.
Finally, the paper proposes compromise schemes for solving the multi-criteria problem of choosing the optimal operation strategy based on the selected set of compromise criteria.
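The stationary mode described above reduces to the linear system $\pi Q = 0$, $\sum_i \pi_i = 1$, where $Q$ is the generator matrix of the continuous-time Markov chain. A minimal sketch for a hypothetical three-state element (idle / computing / failed); the state set and the rates are illustrative assumptions, not values from the paper.

```python
def stationary(Q):
    """Solve pi * Q = 0 with sum(pi) = 1: replace the last (redundant)
    balance equation by the normalization condition and apply Gaussian
    elimination with partial pivoting."""
    n = len(Q)
    A = [[Q[j][i] for j in range(n)] for i in range(n)]  # balance equations
    A[-1] = [1.0] * n                                    # normalization row
    b = [0.0] * (n - 1) + [1.0]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

# Generator for a hypothetical element: idle -> computing (rate 2),
# computing -> idle (3), computing -> failed (0.5), failed -> idle (1);
# diagonal entries make each row sum to zero.
Q = [[-2.0,  2.0, 0.0],
     [ 3.0, -3.5, 0.5],
     [ 1.0,  0.0, -1.0]]
pi = stationary(Q)
print([round(p, 3) for p in pi])   # → [0.538, 0.308, 0.154]
```

Multiplying such stationary probabilities by the number of identical elements (e.g., thin clients) gives the average state occupancies that enter the Chapman–Kolmogorov system and the dispersion risk estimates.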
-
A gradient method with inexact oracle for composite nonconvex optimization
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 321-334
In this paper, we develop a new first-order method for composite nonconvex minimization problems with simple constraints and an inexact oracle. The objective function is given as a sum of a “hard”, possibly nonconvex part and a “simple” convex part. Informally speaking, oracle inexactness means that, for the “hard” part, at any point we can approximately calculate the value of the function and construct a quadratic function that approximately bounds it from above. We give several examples of such inexactness: smooth nonconvex functions with an inexact Hölder-continuous gradient, and functions given by an auxiliary uniformly concave maximization problem that can be solved only approximately. For the introduced class of problems, we propose a gradient-type method that allows one to use different proximal setups to adapt to the geometry of the feasible set, adaptively chooses the controlled oracle error, and allows inexact proximal mappings. We provide a convergence rate for our method in terms of the norm of the generalized gradient mapping and show that, in the case of an inexact Hölder-continuous gradient, our method is universal with respect to the Hölder parameters of the problem. Finally, in a particular case, we show that a small norm of the generalized gradient mapping at a point means that a necessary condition of local minimum approximately holds at that point.
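The role of the generalized gradient mapping can be shown schematically (this is a generic composite gradient step, not the paper's algorithm): for $f(x) + h(x)$ with simple $h$, the next point minimizes the quadratic upper model of $f$ plus $h$, and $G(x) = (x - x_{+})/\mathrm{step}$ serves as the stationarity measure. Below, $f$ is a smooth nonconvex toy function and $h$ is the indicator of the box $[0, 2]$, whose proximal mapping is projection.

```python
def grad_f(x):
    # f(x) = x^4 / 4 - x^2 / 2 (nonconvex); f'(x) = x^3 - x
    return x ** 3 - x

def prox_step(x, step):
    """One composite step: gradient step on the smooth part, then the
    proximal mapping of the box indicator (projection onto [0, 2])."""
    y = x - step * grad_f(x)
    return min(max(y, 0.0), 2.0)

x, step = 2.0, 0.2
for _ in range(100):
    x_next = prox_step(x, step)
    g_norm = abs(x - x_next) / step   # norm of the generalized gradient mapping
    x = x_next
    if g_norm < 1e-8:                 # approximate stationarity reached
        break

print(round(x, 4))                    # → 1.0
```

The iterate settles at the constrained stationary point $x = 1$, and the vanishing norm of the gradient mapping is exactly the certificate of approximate stationarity that the paper's convergence rate is stated in.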
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
International Interdisciplinary Conference "Mathematics. Computing. Education"