-
Bottom stability in closed conduits
Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1061-1068
In this paper, a one-dimensional stability problem for a closed flow channel with a sandy bed is solved on the basis of the riverbed model proposed earlier. A feature of the problem is the use of an original equation of riverbed deformations, which takes into account the influence of the mechanical and granulometric characteristics of the bed material and of the bed slope. Another feature is that, alongside the shear stress, the influence of the normal stress is considered when investigating riverbed instability. From the solution of the stability problem for the sandy bed of a closed flow channel, an analytical expression is obtained for the wavelength of the fastest-growing bed perturbations. Analysis of this expression shows that it generalizes a number of well-known empirical formulas: those of Coleman, Shulyak, and Bagnold. The structure of the expression indicates the existence of two hydrodynamic regimes, distinguished by the Froude number, in which the growth of bed perturbations depends either strongly or weakly on the Froude number. Given the natural stochasticity of bed-wave motion and the existence of a solution domain with only weak dependence on the Froude number, one should expect experimental observations of the development of bed waves to yield data with significant scatter, and this is what occurs in reality.
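For orientation, the Froude number that separates the two regimes mentioned above is the standard ratio of flow inertia to gravity; in the usual open-channel form (the paper's normalization may differ) it reads

$$\mathrm{Fr} = \frac{U}{\sqrt{gH}},$$

where $U$ is the mean flow velocity, $H$ the flow depth, and $g$ the gravitational acceleration.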
-
Tests for detecting the parallel organization of logical computations, based on algebras and automata
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 621-638
We build new tests that make it possible to increase human capacity for information processing through the parallel execution of several logical operations of a prescribed type. To check the causes of this increase in capacity, we develop control tests on the same class of logical operations for which parallel organization of the computation is ineffective. We use the apparatus of universal algebra and automata theory. This article extends a cycle of works investigating human capacity for parallel computation; the main publications on this topic are given in the references. The tasks in the described tests can be stated as computing the result of a sequence of operations of the same type from some algebra. If the operation is associative, the computation can be parallelized effectively by suitable grouping of the operands. In the theory of computation this corresponds to the simultaneous work of several processors, each of which transforms per unit time a certain known number of elements of the input data or intermediate results (the processor productivity). It is not currently known what kind of data elements the brain uses for logical or mathematical computation, nor how many elements it processes per unit time. Therefore, the test contains a sequence of task presentations with different numbers of logical operations over a fixed alphabet; this number serves as the complexity measure of the task. Analyzing how the solution time depends on the complexity makes it possible to estimate the processor productivity and the form of organization of the computation. For sequential computation only one processor works, and the solution time is a linear function of the complexity. If new processors begin to work in parallel as the complexity of the task increases, the dependence of solution time on complexity is represented by a curve that is convex from below. To detect the situation in which a person increases the speed of a single processor as the complexity grows, we use series of tasks with similar operations but in a non-associative algebra. In such tasks parallel computation is of little effectiveness, in the sense that increasing the number of processors yields little gain in efficiency; these form the control set of tests. We also consider one more class of tests, based on computing the trajectory of the states of a formal automaton for a given input sequence. We investigate a special class of automata (relays) whose construction affects the effectiveness of parallel computation of the final automaton state. For all tests we estimate the effectiveness of parallel computation. This article does not contain experimental results.
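To illustrate why associativity is what makes parallel grouping pay off, here is a hypothetical sketch (not the authors' test software): a sequential fold takes $n-1$ dependent steps, while a balanced pairwise grouping, legal only for an associative operation, needs about $\log_2 n$ parallel rounds:

```python
from functools import reduce

def sequential_fold(op, xs):
    # One processor: n-1 dependent applications of op, one after another.
    return reduce(op, xs)

def tree_reduce(op, xs):
    # Valid only for associative op: pair up neighbors each round,
    # so about log2(n) rounds suffice given enough processors.
    while len(xs) > 1:
        paired = [op(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:            # odd element carries over to the next round
            paired.append(xs[-1])
        xs = paired
    return xs[0]

# Associative example: XOR over bits gives identical results either way.
bits = [1, 0, 1, 1, 0, 1]
assert sequential_fold(lambda a, b: a ^ b, bits) == tree_reduce(lambda a, b: a ^ b, bits)
```

For a non-associative operation the pairwise regrouping would change the result, which is exactly why the control tests use a non-associative algebra.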
-
Parallel implementation of the grid-characteristic method in the case of explicit contact boundaries
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 667-678
We consider an application of the Message Passing Interface (MPI) technology for parallelization of a program code that solves the equations of linear elasticity theory. The solution of these equations describes the propagation of elastic waves in deformable rigid bodies, and the solution of such a direct problem of seismic wave propagation is of interest in seismic exploration and geophysics. Our solver uses the grid-characteristic method for simulations. We consider a technique to reduce the communication time between MPI processes during the simulation. This is important when modeling must be conducted in complex problem formulations while still maintaining a high level of parallel effectiveness, even when thousands of processes are used. An effective communication scheme is extremely important when several computational grids with arbitrary geometry of contacts between them are used in a calculation, and the complexity of this task increases if independent distribution of grid nodes between processes is allowed. In this paper, a generalized approach is developed for processing contact conditions in terms of reinterpolation of nodes from a given section of one grid to a certain area of the second grid. An efficient way of parallelization and of establishing interprocess communications is proposed. For the example problems we provide wave fields and seismograms in both 2D and 3D formulations. It is shown that the algorithm can be realized on both Cartesian and structured (curvilinear) computational grids. The considered formulations demonstrate the possibility of carrying out calculations that take into account surface topography and the curvilinear geometry of contacts between geological layers. The use of curvilinear grids yields more accurate results than calculations on Cartesian grids alone. The resulting parallelization efficiency is almost 100% for up to 4096 processes (we used 128 processes as the baseline for computing efficiency). With more than 4096 processes, an expected gradual decrease in efficiency is observed; the rate of decline is small, so at 16384 processes the parallelization efficiency remains at 80%.
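The efficiency figures quoted above follow the usual definition of parallel efficiency relative to a baseline run; a minimal sketch of that bookkeeping (the timing numbers below are invented purely for illustration):

```python
def parallel_efficiency(t_base, n_base, t_n, n):
    """Speedup relative to an n_base-process baseline, divided by the extra resources."""
    return (t_base * n_base) / (t_n * n)

# Hypothetical wall-clock times: with perfect scaling from 128 to 4096 processes,
# runtime shrinks 32x and efficiency stays at 1.0 (i.e., ~100%).
print(parallel_efficiency(t_base=3200.0, n_base=128, t_n=100.0, n=4096))  # -> 1.0
```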
-
Effect of buoyancy force on mixed convection of a variable density fluid in a square lid-driven cavity
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 575-595
The paper considers the problem of stationary mixed convection and heat transfer of a viscous heat-conducting fluid in a plane square lid-driven cavity. The hot top lid of the cavity has temperature $T_\mathrm{H}$ and the cold bottom wall has temperature $T_0$ $(T_\mathrm{H} > T_0)$, while the side walls are insulated. A feature of the problem is that the fluid density can take arbitrary values depending on the degree of overheating of the cavity lid. The mathematical formulation includes the Navier–Stokes equations in the 'velocity–pressure' variables and the heat balance equation, which take into account the incompressibility of the fluid flow and the influence of the volumetric buoyancy force. The difference approximation of the original differential equations is performed by the control volume method. Numerical solutions of the problem are obtained on a $501 \times 501$ grid for the following values of the similarity parameters: Prandtl number Pr = 0.70; Reynolds number Re = 100 and 1000; Richardson number Ri = 0.1, 1, and 10; and relative lid overheating $(T_\mathrm{H}-T_0)/T_0 = 0, 1, 2, 3$. Detailed flow patterns in the form of streamlines and isotherms of the relative overheating of the fluid flow are given. It is shown that an increase in the Richardson number (an increase in the influence of the buoyancy force) leads to a fundamental change in the structure of the fluid flow. It is also found that taking into account the variability of the fluid density weakens the influence of the growth of Ri on the transformation of the flow structure. The cause of this weakening is the change in density within a closed volume, which always produces zones of negative buoyancy in the presence of a volumetric force. As a consequence, the competition between positive and negative volumetric forces weakens the overall buoyancy effect. The behavior of the heat exchange coefficient (Nusselt number) and of the friction coefficient along the bottom wall of the cavity is also analyzed as a function of the problem parameters. It is revealed that the greater the Richardson number, the greater, ceteris paribus, the influence of the density variation on these coefficients.
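For orientation, the similarity parameters listed above have their standard definitions for a lid-driven cavity (the paper's exact nondimensionalization for the variable-density case may differ); with lid speed $U$, cavity side $L$, kinematic viscosity $\nu$, thermal diffusivity $\alpha$, and thermal expansion coefficient $\beta$:

$$\mathrm{Re} = \frac{UL}{\nu}, \qquad \mathrm{Pr} = \frac{\nu}{\alpha}, \qquad \mathrm{Ri} = \frac{\mathrm{Gr}}{\mathrm{Re}^2} = \frac{g\beta(T_\mathrm{H} - T_0)L}{U^2}.$$

Large Ri marks buoyancy-dominated flow; small Ri, lid-shear-dominated flow.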
-
Hypergraph approach to the decomposition of complex technical systems
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1007-1022
The article considers a mathematical model of the decomposition of a complex product into assembly units. This is an important engineering problem that affects the organization of discrete production and its operational management. A review of modern approaches to the mathematical modeling and automated computer-aided design of decompositions is given. In these approaches, graphs, networks, matrices, etc. serve as mathematical models of the structures of technical systems; they describe the mechanical structure as a binary relation on the set of system elements. The geometric coordination and integrity of machines and mechanical devices during manufacturing is achieved by means of basing. In general, basing can be performed on several elements simultaneously; it therefore represents a relation of variable arity, which cannot be correctly described in terms of binary mathematical structures. A new hypergraph model of the mechanical structure of a technical system is described. This model provides an adequate formalization of assembly operations and processes. Assembly operations that are carried out by two working bodies and consist in the realization of mechanical connections are considered. Such operations are called coherent and sequential; this is the prevailing type of operation in modern industrial practice. It is shown that the mathematical description of such an operation is a normal contraction of an edge of the hypergraph, and a sequence of contractions transforming the hypergraph into a point is a mathematical model of the assembly process. Two important theorems on the properties of contractible hypergraphs and their subgraphs, proved by the author, are presented. The concept of $s$-hypergraphs is introduced; $s$-hypergraphs are correct mathematical models of the mechanical structures of any assembled technical systems. Decomposition of a product into assembly units is defined as the cutting of an $s$-hypergraph into $s$-subgraphs. The cutting problem is described in terms of discrete mathematical programming. Mathematical models of structural, topological, and technological constraints are obtained. Objective functions are proposed that formalize the optimal choice of design solutions in various situations. The developed mathematical model of product decomposition is flexible and open; it allows extensions that take into account the characteristics of the product and its production.
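As a toy illustration of the contraction operation that models a single assembly step, the sketch below represents a hypergraph as a set of frozensets and contracts one hyperedge; this is an assumed simplified representation, not the author's formal definition of a normal contraction:

```python
def contract_edge(edges, edge, merged):
    """Contract a hyperedge: all its vertices collapse into one new vertex.

    edges  -- iterable of frozensets (the hyperedges)
    edge   -- the hyperedge to contract
    merged -- label of the vertex replacing edge's vertices
    """
    new_edges = set()
    for e in edges:
        if e == edge:
            continue  # the contracted edge itself disappears
        # Vertices belonging to the contracted edge are replaced by `merged`.
        replaced = frozenset(v for v in e if v not in edge)
        if e & edge:
            replaced |= {merged}
        if len(replaced) > 1:     # singleton edges (loops) are dropped
            new_edges.add(replaced)
    return new_edges

# Three-part product: contracting edges one by one until the hypergraph
# collapses to a point models an assembly sequence.
H = {frozenset({"a", "b"}), frozenset({"a", "b", "c"})}
H = contract_edge(H, frozenset({"a", "b"}), "ab")
print(H)  # {frozenset({'ab', 'c'})}
```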
-
Modeling the response of polycrystalline ferroelectrics to high-intensity electric and mechanical fields
Computer Research and Modeling, 2022, v. 14, no. 1, pp. 93-113
A mathematical model is presented that describes the irreversible processes of polarization and deformation of polycrystalline ferroelectrics in external electric and mechanical fields of high intensity, as a result of which the internal structure and the properties of the material change. Irreversible phenomena are modeled in a three-dimensional setting for the case of simultaneous action of an electric field and mechanical stresses. The object of the research is a representative volume in which the residual phenomena, in the form of the induced and irreversible parts of the polarization vector and the strain tensor, are investigated. The main task of the modeling is to construct constitutive relations connecting the polarization vector and strain tensor, on the one hand, with the electric field vector and mechanical stress tensor, on the other. The general case is considered, in which the direction of the electric field need not coincide with any of the principal directions of the mechanical stress tensor. For the reversible components, the constitutive relations are constructed as linear tensor equations in which the elastic moduli and dielectric permittivities depend on the residual strain, and the piezoelectric moduli depend on the residual polarization. The constitutive relations for the irreversible parts are constructed in several stages. First, an auxiliary model is constructed for the ideal, hysteresis-free case, in which all vectors of spontaneous polarization can rotate in the fields of external forces without mutual influence on each other. A numerical method is proposed for calculating the resulting maximum possible values of polarization and deformation in the ideal case, in the form of surface integrals over the unit sphere with the distribution density obtained from the statistical Boltzmann law. Then estimates are made of the energy costs required to break the mechanisms holding the domain walls, and the work of external fields in the real and ideal cases is calculated. On this basis the energy balance is derived, and the constitutive relations for the irreversible components are obtained in the form of equations in differentials. A scheme for the numerical solution of these equations is developed to determine the current values of the irreversible characteristics in the given electric and mechanical fields. For cyclic loads, dielectric, deformation, and piezoelectric hysteresis curves are plotted.
The developed model can be embedded into a finite element package for calculating inhomogeneous residual polarization and deformation fields, with subsequent determination of the physical moduli of inhomogeneously polarized ceramics as a locally anisotropic body.
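To illustrate the unit-sphere averaging with a Boltzmann density mentioned above, here is a small numerical sketch under a deliberately simplified energy $E(\theta) = -E_0 p_s \cos\theta$ (field only, no stresses, unlike the model in the paper); in this toy case the average reduces to the classical Langevin function $L(x) = \coth x - 1/x$:

```python
import numpy as np

def mean_polarization(x, n=200_000):
    """<cos theta> over the unit sphere with Boltzmann weight exp(x*cos theta).

    x = E0*p_s/(kT) is a dimensionless field strength (toy energy model).
    """
    theta = np.linspace(0.0, np.pi, n)
    w = np.exp(x * np.cos(theta)) * np.sin(theta)  # sin(theta): sphere area element
    # The grid step cancels in the ratio, so plain sums suffice.
    return float(np.sum(np.cos(theta) * w) / np.sum(w))

x = 2.0
langevin = 1.0 / np.tanh(x) - 1.0 / x   # closed form for this toy energy
print(mean_polarization(x), langevin)   # both approximately 0.537
```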
-
On the relations of stochastic convex optimization problems with empirical risk minimization problems on $p$-norm balls
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319
In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e.g., risk minimization) and mathematical statistics (e.g., maximum likelihood estimation). There are two main approaches to solving such problems, namely the Stochastic Approximation approach (online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline approach). In the offline approach, the problem is replaced by its empirical counterpart (the empirical risk minimization problem). The natural question is how to choose the sample size, i.e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is a solution of the original problem with the desired precision. This is one of the main questions in modern machine learning and optimization. In the last decade, many significant advances were made in solving convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of arbitrary balls in $p$-norms. We also explore how the parameter $p$ affects the estimates of the required number of terms in the empirical risk.
In this paper, both convex and saddle point optimization problems are considered. For strongly convex problems, the existing results on equal sample sizes in both approaches (online and offline) are generalized to arbitrary norms. Moreover, it is shown that the strong convexity condition can be weakened: the obtained results are valid for functions satisfying the quadratic growth condition. In the case when this condition is not met, it is proposed to regularize the original problem in an arbitrary norm. In contrast to convex problems, saddle point problems are much less studied. For saddle point problems, the sample size is obtained under the condition of $\gamma$-growth of the objective function; for $\gamma = 1$ this condition is the sharp minimum condition of convex problems. In this article, it is shown that the sample size in the case of a sharp minimum is almost independent of the desired accuracy of the solution of the original problem.
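For concreteness, the offline (SAA) replacement discussed above swaps the expectation for an empirical mean over $N$ sampled realizations; over a $p$-norm ball of radius $R$ this reads (notation assumed here, following standard usage rather than the paper's exact statement):

$$\min_{\|x\|_p \le R} \; \mathbb{E}_{\xi} f(x, \xi) \quad \longrightarrow \quad \min_{\|x\|_p \le R} \; \frac{1}{N} \sum_{i=1}^{N} f(x, \xi_i),$$

and the sample-size question is the smallest $N$ for which an approximate minimizer of the right-hand problem solves the left-hand one to the desired precision.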
-
Stochastic formalization of the gas dynamic hierarchy
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 767-779
Mathematical models of gas dynamics and the computational industry built on them are, in our opinion, far from perfect. We look at this problem from the point of view of a transparent probabilistic micro-model of a gas of hard spheres, relying both on the theory of random processes and on classical kinetic theory in terms of the densities of distribution functions in phase space. Namely, we first construct a system of nonlinear stochastic differential equations (SDEs), and then a generalized random and non-random integro-differential Boltzmann equation that takes correlations and fluctuations into account. The key feature of the initial model is the random nature of the intensity of the jump measure and its dependence on the process itself.
We briefly recall the transition to increasingly coarse meso- and macro-approximations in accordance with the decrease of the governing dimensionless parameter, the Knudsen number. We obtain stochastic and non-random equations, first in phase space (a meso-model in terms of SDEs with respect to the Wiener measure and the Kolmogorov–Fokker–Planck equations), and then in coordinate space (macro-equations that differ from the Navier–Stokes system and from quasi-gas-dynamic systems). The main distinction of this derivation is a more accurate averaging over velocity, thanks to the analytical solution of the stochastic differential equations with respect to the Wiener measure, in the form of which the intermediate meso-model in phase space is presented. This approach differs significantly from the traditional one, which uses not the random process itself but its distribution function. The emphasis is placed on the transparency of the assumptions made in passing from one level of detail to another, rather than on numerical experiments, which contain additional approximation errors.
The theoretical power of the microscopic representation of macroscopic phenomena is also important as conceptual support for particle methods, which are an alternative to difference and finite element methods.
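To fix notation for the meso-level, a generic SDE driven jointly by a Wiener process and a jump measure has the schematic form (a textbook template only; the system in the paper is nonlinear, with a random, state-dependent jump intensity):

$$dX_t = a(X_t)\,dt + \sigma(X_t)\,dW_t + \int_{\mathbb{R}^d} c(X_{t-}, z)\, N(dt, dz),$$

where $W_t$ is a Wiener process and $N$ is the jump (Poisson-type) random measure whose intensity, in the model above, is itself random and depends on the process.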
-
Repressilator with time-delayed gene expression. Part I. Deterministic description
Computer Research and Modeling, 2018, v. 10, no. 2, pp. 241-259
The repressilator is the first genetic regulatory network in synthetic biology; it was artificially constructed in 2000. It is a closed network of three genetic elements, $lacI$, $\lambda cI$ and $tetR$, which have a natural origin but do not occur in nature in this combination. The promoter of each of the three genes controls the next cistron via negative feedback, suppressing the expression of the neighboring gene. In this paper, the nonlinear dynamics of a modified repressilator with time delays in all parts of the regulatory network is studied for the first time. The delay can be natural, i.e., arising during the transcription/translation of genes due to the multistage nature of these processes, or artificial, i.e., deliberately introduced into the operation of the regulatory network using the technologies of synthetic biology. It is assumed that regulation is carried out by proteins in dimeric form. The considered repressilator has two more important modifications: the gene $gfp$, which codes for the fluorescent protein, is located on the same plasmid, and a DNA sponge is present in the system. The nonlinear dynamics is considered within a deterministic description. By applying the method of decomposition into fast and slow motions, a set of nonlinear delay differential equations on a slow manifold is obtained. It is shown that there exists a single equilibrium state, which loses stability in an oscillatory manner at certain values of the control parameters. For a symmetric repressilator, in which all three genes are identical, an analytical solution for the neutral Andronov–Hopf bifurcation curve is obtained. For the general case of an asymmetric repressilator, the neutral curves are found numerically. It is shown that an asymmetric repressilator is generally more stable, since the system is oriented toward the behavior of the most stable element in the network. The nonlinear dynamic regimes arising in the repressilator as the parameters increase are studied in detail. It is found that there exists a limit cycle corresponding to relaxation oscillations of the protein concentrations. In addition to the limit cycle, we find a slow manifold not associated with that cycle: a long-lived transient regime that reflects the process of long-term synchronization of pulsations in the work of individual genes. The obtained results are compared with experimental data known from the literature. The place of the proposed model among other theoretical models of the repressilator is discussed.
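For context, the classical delay-free repressilator of Elowitz and Leibler (2000), which the model above generalizes with delays, dimeric regulation, a reporter gene, and a DNA sponge, is the rescaled six-variable system

$$\frac{dm_i}{dt} = -m_i + \frac{\alpha}{1 + p_j^{\,n}} + \alpha_0, \qquad \frac{dp_i}{dt} = -\beta\,(p_i - m_i),$$

where $m_i$ and $p_i$ are mRNA and protein concentrations, $n$ is the Hill coefficient, and the index pairs $(i, j)$ run cyclically so that each gene is repressed by the protein of the preceding gene in the loop.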
-
Survey of convex optimization of Markov decision processes
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 329-353
This article reviews both historical achievements and modern results in the field of Markov decision processes (MDP) and convex optimization. This review is the first attempt to cover the field of reinforcement learning in Russian in the context of convex optimization. The fundamental Bellman equation and the optimality criteria based on it for policies (strategies that make decisions from the known current state of the environment) are considered, as are the main iterative algorithms of policy optimization based on solving the Bellman equations. An important section of the article is devoted to an alternative to the $Q$-learning approach: the method of direct maximization of the agent's average reward, for the chosen strategy, from interaction with the environment. The solution of this convex optimization problem can be represented as a linear programming problem. The paper demonstrates how the apparatus of convex optimization is used to solve the problem of reinforcement learning (RL). In particular, it is shown how the concept of strong duality allows a natural modification of the formulation of the RL problem, showing the equivalence between maximizing the agent's reward and finding its optimal strategy. The paper also discusses the complexity of MDP optimization with respect to the number of state-action-reward triples obtained from interaction with the environment. Optimal bounds on the complexity of solving MDPs are presented for the case of an ergodic process with an infinite horizon, as well as for a non-stationary process with a finite horizon that can be restarted several times in a row or run in parallel in several threads. The review also covers the latest results on reducing the gap between the lower and upper bounds on the complexity of optimizing MDPs with average reward (Averaged MDP, AMDP). In conclusion, real-valued parametrization of the agent's policy and a class of gradient optimization methods maximizing the $Q$-function of value are considered. In particular, a special class of MDPs with constraints on the policy value (Constrained Markov Decision Process, CMDP) is presented, for which a general primal-dual approach to optimization with strong duality is proposed.
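For reference, the fundamental Bellman optimality equation mentioned above, in its standard discounted form with discount factor $\gamma \in (0, 1)$, is

$$V^{*}(s) = \max_{a} \Big[ r(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \Big],$$

and the linear programming reformulation mentioned in the survey replaces the maximum by the system of linear inequalities $V(s) \ge r(s, a) + \gamma \sum_{s'} P(s' \mid s, a) V(s')$ over all actions $a$.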