All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Bottom stability in closed conduits
Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1061-1068
Based on a previously proposed riverbed model, this paper solves the one-dimensional stability problem for a closed flow channel with a sandy bed. A distinctive feature of the problem is the use of an original equation of riverbed deformation that accounts for the mechanical and granulometric characteristics of the bed material and for the bed slope. Another feature is that the analysis of riverbed instability considers the influence of normal stresses together with that of shear stresses. The solution of the stability problem yields an analytical expression for the wavelength of the fastest-growing bed perturbations. Analysis of this expression shows that it generalizes a number of well-known empirical formulas: those of Coleman, Shulyak, and Bagnold. Its structure indicates the existence of two hydrodynamic regimes, characterized by the Froude number, in which the growth of bed perturbations depends either strongly or weakly on the Froude number. Given the natural stochasticity of bed-wave motion and the existence of a solution domain with weak dependence on the Froude number, one can conclude that experimental observation of the development of bed waves should produce data with significant scatter, as is indeed observed in practice.
-
Game-theoretic model of coordination of interests in the innovative development of corporations
Computer Research and Modeling, 2016, v. 8, no. 4, pp. 673-684
Dynamic game-theoretic models of corporate innovative development are investigated. The proposed models are based on the coordination of the private and public interests of agents. It is supposed that the interest structure of each agent includes both a private (personal) component and a public component (the interests of the whole company, connected primarily with its innovative development). The agents allocate their personal resources between these two directions. The system dynamics is described by a difference (not differential) equation. The model of innovative development is studied by simulation, using enumeration of the domains of feasible controls with a constant step. The main contribution of the paper is a comparative analysis of the efficiency of hierarchical control methods (compulsion or impulsion) for the Stackelberg and Germeier information structures (four structures in total) by means of system-compatibility indices. The model is universal and can be used for scientifically grounded support of the innovative development programs of any firm. The features of a specific company are accounted for during model identification (determining the specific classes of model functions and the numerical values of their parameters), which is a separate complex problem requiring analysis of statistical data and expert estimates.
The following assumptions about the information rules of the hierarchical game are accepted: all players use open-loop strategies; the leader chooses and reports to the followers values of administrative (compulsion) or economic (impulsion) control variables, which can be functions of time only (Stackelberg games) or can also depend on the followers' controls (Germeier games); given the leader's strategies, all followers simultaneously and independently choose their strategies, which yields a Nash equilibrium in the followers' game. For a finite number of iterations, the proposed simulation algorithm either builds an approximate solution of the model or concludes that one does not exist. The reliability and efficiency of the algorithm follow from the properties of the scenario method and of direct ordered enumeration with a constant step. Comprehensive conclusions about the comparative efficiency of hierarchical control methods for innovations are obtained.
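The enumeration scheme used in the paper can be sketched as follows. This is a minimal illustration, not the authors' code: the quadratic payoff functions and the grids are hypothetical, and only the Stackelberg case with a single follower is shown (the leader announces a control, the follower best-responds, and the leader's grid is scanned with a constant step).

```python
import numpy as np

def stackelberg_by_enumeration(leader_payoff, follower_payoff,
                               leader_grid, follower_grid):
    """Enumerate the leader's control grid with a constant step; for each
    announced control the follower best-responds on his own grid, and the
    leader keeps the choice maximizing his payoff."""
    best = None
    for u in leader_grid:
        # follower's best response to the announced control u
        v = max(follower_grid, key=lambda w: follower_payoff(u, w))
        j = leader_payoff(u, v)
        if best is None or j > best[0]:
            best = (j, u, v)
    return best  # (leader payoff, leader control, follower response)

# Hypothetical quadratic payoffs, controls in [0, 1] with step 0.01.
grid = np.linspace(0.0, 1.0, 101)
res = stackelberg_by_enumeration(
    lambda u, v: u * v - 0.5 * u ** 2,         # leader
    lambda u, v: v * (1 - v) + 0.1 * u * v,    # follower
    grid, grid)
```

The Germeier case differs only in that the leader announces a strategy (a function of the follower's control) rather than a single value, so the outer enumeration runs over a family of such functions.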
-
The discrete form of the equations in the theory of the shifting mode of reproduction with different variants of financial flows
Computer Research and Modeling, 2016, v. 8, no. 5, pp. 803-815
Different versions of models of the shifting mode of reproduction describe a set of interacting macroeconomic production subsystems, each with its own household. These subsystems differ in the age of the fixed capital they use, as they alternately stop production to update it by their own efforts (to repair equipment and to introduce innovations that increase production efficiency). This essentially distinguishes models of this type from those describing the joint mode of reproduction, in which the updating of fixed capital and the production of a product occur simultaneously. Models of the shifting mode of reproduction make it possible to describe the mechanisms of phenomena such as money circulation and amortization, to describe different types of monetary policy, and to interpret the mechanisms of economic growth in a new way. Unlike many other macroeconomic models, models of this class, in which the competing subsystems in turn gain an advantage over the others through updating, are essentially non-equilibrium. They were originally formulated as systems of ordinary differential equations with abruptly varying coefficients. Numerical calculations for these systems revealed both regular and irregular dynamics, depending on parameter values and initial conditions. This paper shows that the simplest versions of the model can, without additional approximations, be represented in discrete form (as nonlinear mappings) with different variants (continuous and discrete) of the financial flows between subsystems (interpreted as wages and subsidies). This representation is more convenient both for obtaining analytical results and for more economical and accurate numerical calculations.
In particular, it made it possible to determine the initial conditions corresponding to coordinated and sustained economic growth without systematic lags in the production of one subsystem relative to the others.
-
Analysis of point model of fibrin polymerization
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 247-258
Functional modeling of blood clotting and fibrin-polymer mesh formation is of significant value for medical and biophysical applications. Despite some discrepancies present in simplified functional models, their results are of great interest to experimental science as a handy analysis tool for research planning, data processing, and verification. Given good correspondence to experiment, functional models can serve as an element of medical treatment methods and biophysical technologies. The aim of this paper is to model a point system of fibrin-polymer formation as a multistage polymerization process with a sol-gel transition at the final stage. The complex-valued Rosenbrock method of second order (CROS) was used for the computational experiments, whose results are presented and discussed. It was shown that in the physiological range of the model coefficients there is a lag period of approximately 20 seconds between the initiation of the reaction and the appearance of fibrin gel, which fits experimental observations of fibrin polymerization dynamics well. The possibility of a number of consecutive $(n = 1–3)$ sol-gel transitions was demonstrated as well. Such specific behavior is a consequence of the multistage nature of the fibrin polymerization process. At the final stage the solution of fibrin oligomers of length 10 can reach a semidilute state, leading to extremely fast gel formation controlled by the oligomers' rotational diffusion. Otherwise, if the semidilute state is not reached, gel formation is controlled by the significantly slower process of translational diffusion. This duality of the sol-gel transition led the authors to introduce a switch function into the equation for fibrin-polymer formation kinetics.
Consecutive polymerization events can correspond to experimental systems in which the fibrin mesh that has formed is withdrawn from the volume by some physical process such as precipitation. A sensitivity analysis of the system shows that the dependence on the rate constant of the first polymerization stage is nontrivial.
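The CROS scheme mentioned above is the one-stage Rosenbrock scheme with the complex coefficient $(1+i)/2$: one solves $\bigl(1 - \tfrac{1+i}{2}\,h\,f'(y_n)\bigr)w = f(y_n)$ and sets $y_{n+1} = y_n + h\,\mathrm{Re}\,w$. A minimal scalar sketch on an illustrative test equation (not the paper's kinetic system):

```python
import math

def cros_step(f, dfdy, y, h):
    """One step of the complex Rosenbrock scheme (CROS):
    (1 - (1+i)/2 * h * f'(y)) * w = f(y),  y_new = y + h * Re(w)."""
    w = f(y) / (1.0 - 0.5 * (1.0 + 1j) * h * dfdy(y))
    return y + h * w.real

# Illustrative problem y' = -y, y(0) = 1; exact solution exp(-t).
y, h = 1.0, 0.01
for _ in range(100):
    y = cros_step(f=lambda v: -v, dfdy=lambda v: -1.0, y=y, h=h)
# y now approximates exp(-1) to second-order accuracy
```

For a system of equations the scalar division becomes a complex linear solve with the Jacobian matrix; the scheme is L-stable, which is what makes it suitable for stiff kinetics of this kind.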
-
Tests for checking the parallel organization of logical calculations, based on algebra and automata
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 621-638
We construct new tests that make it possible to increase human capacity for information processing through the parallel execution of several logical operations of a prescribed type. To check the causes of this increase in capacity, we develop control tests on the same class of logical operations for which parallel organization of the calculations is ineffective. We use the apparatus of universal algebra and automata theory. This article continues a cycle of work investigating the human capacity for parallel calculation; the general publications on this theme are listed in the references. The tasks in the described tests can be defined as computing the result of a sequence of operations of the same type from some algebra. If the operation is associative, parallel calculation is made effective by suitable grouping of the process, which in the theory of computation corresponds to the simultaneous work of several processors. Each processor transforms per unit time a certain known number of elements of the input data or intermediate results (the processor productivity). It is not currently known what kind of data elements the brain uses for logical or mathematical calculation, or how many elements it processes per unit time. The test therefore contains a sequence of task presentations with different numbers of logical operations in a fixed alphabet; this number is the measure of task complexity. Analysis of how solution time depends on complexity makes it possible to estimate the processor productivity and the form of organization of the calculation. For sequential calculation only one processor works, and the solution time is a linear function of complexity.
If new processors start working in parallel as task complexity increases, the dependence of solution time on complexity is represented by a downward-convex curve. To detect the situation in which a person increases the speed of a single processor as complexity grows, we use series of tasks with similar operations but in a non-associative algebra. In such tasks parallel calculation is of little effectiveness, in the sense that increasing the number of processors brings little gain in efficiency; this is the control set of tests. We also consider one more class of tests, based on calculating the trajectory of states of a formal automaton for a given input sequence. We investigate a special class of automata (relays) for which the construction affects the effectiveness of parallel calculation of the final automaton state. For all tests we estimate the effectiveness of parallel calculation. This article does not contain experimental results.
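The role of associativity described above can be illustrated with a tree-structured reduction (a generic sketch, not the authors' test material): an associative operation allows the chain to be regrouped into independent halves that could be evaluated by separate processors, while a non-associative operation cannot be regrouped without changing the result.

```python
from functools import reduce

def tree_reduce(op, xs):
    """Reduce a sequence with a binary operation by recursively splitting
    it in half; the two halves are independent and could be evaluated in
    parallel. The result equals the left-to-right one only if op is
    associative."""
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return op(tree_reduce(op, xs[:mid]), tree_reduce(op, xs[mid:]))

xs = list(range(1, 9))

# Associative operation (addition): regrouping preserves the result.
par_sum = tree_reduce(lambda a, b: a + b, xs)

# Non-associative operation (subtraction): regrouping changes the result,
# so parallel grouping does not compute the sequential value.
seq_sub = reduce(lambda a, b: a - b, xs)      # ((1-2)-3)-... = -34
par_sub = tree_reduce(lambda a, b: a - b, xs)  # ((1-2)-(3-4)) - ... = 0
```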
-
Estimation of natural frequencies of pure bending vibrations of composite nonlinearly elastic beams and circular plates
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 945-953
The paper presents a linearization method for the stress-strain curves of nonlinearly deformable beams and circular plates in order to generalize the equations of pure bending vibration. We consider composite, on-average isotropic prismatic beams of constant rectangular cross-section and circular plates of constant thickness made of nonlinearly elastic materials. The technique consists in determining approximate Young's moduli from the initial stress-strain state of the beam and plate subjected to a bending moment.
The paper proposes two linearization criteria: equality of the specific potential energy of deformation, and minimization of the standard deviation in the approximation of the state equation. The method makes it possible to obtain in closed form estimates of the natural frequencies of layered and structurally heterogeneous, on-average isotropic nonlinearly elastic beams and circular plates, which significantly reduces the resources needed for vibration analysis and modeling of these structural elements. In addition, the paper shows that the two proposed linearization criteria estimate the natural frequencies with the same accuracy.
Since in the general case even isotropic materials resist tension and compression differently, the stress-strain curves of the composite material components are taken to be piecewise-linear Prandtl diagrams with proportionality limits and tangential Young's moduli that differ under tension and compression. As parameters of the stress-strain curve we take the effective Voigt characteristics (under the hypothesis of strain homogeneity) for a longitudinally layered material structure, and the effective Reuss characteristics (under the hypothesis of stress homogeneity) for a transversely layered beam and an axially laminated plate. In addition, the effective Young's moduli and proportionality limits obtained by the author's homogenization method are given for a structurally heterogeneous, on-average isotropic material. As an example, the natural frequencies of two-phase beams are calculated as functions of the component concentrations.
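The Voigt and Reuss effective moduli mentioned above have a standard closed form: the volume-weighted arithmetic and harmonic means of the phase moduli, respectively. A minimal sketch for a two-phase material (the concentrations and moduli below are illustrative, not taken from the paper):

```python
def voigt_modulus(c, E):
    """Voigt (iso-strain) effective Young's modulus:
    volume-weighted arithmetic mean of the phase moduli."""
    return sum(ci * Ei for ci, Ei in zip(c, E))

def reuss_modulus(c, E):
    """Reuss (iso-stress) effective Young's modulus:
    volume-weighted harmonic mean of the phase moduli."""
    return 1.0 / sum(ci / Ei for ci, Ei in zip(c, E))

# Two-phase example: 40% of a stiff phase and 60% of a compliant one (GPa).
c, E = [0.4, 0.6], [200.0, 70.0]
E_v = voigt_modulus(c, E)   # 122.0 GPa
E_r = reuss_modulus(c, E)   # ≈ 94.59 GPa
# The actual effective modulus lies between the Reuss and Voigt bounds.
```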
-
An analysis of interatomic potentials for vacancy diffusion simulation in concentrated Fe–Cr alloys
Computer Research and Modeling, 2018, v. 10, no. 1, pp. 87-101
The study tested the correctness of three interatomic potentials available in the scientific literature in reproducing vacancy diffusion in concentrated Fe–Cr alloys by molecular dynamics simulation. This was necessary for a further detailed study of the vacancy diffusion mechanism in these alloys at Cr contents of 5–25 at.% and temperatures in the range 600–1000 K. The analysis of the potentials was performed on alloy models with Cr contents of 10, 20, and 50 at.%; the model with 50 at.% chromium was needed for a further study of diffusion processes in chromium-rich precipitates in these alloys. For all the potentials used, the vacancy formation energies and the mobilities of iron and chromium atoms via an artificially created vacancy were calculated and analyzed. The time dependence of the mean squared displacement of atoms was chosen as the main characteristic for the analysis of atomic mobilities. The simulation of vacancy formation energies did not show qualitative differences between the investigated potentials. The study of atomic mobilities showed poor reproduction of vacancy diffusion in the simulated alloys by the concentration-dependent model (CDM), which strongly underestimated the mobility of chromium atoms via a vacancy in the investigated range of temperature and chromium content. It was also established that the two-band model (2BM) potentials, in both their original and modified versions, do not have such drawbacks, which allows these potentials to be used in simulations of the vacancy diffusion mechanism in Fe–Cr alloys. Both potentials show a significant dependence of the ratio of chromium to iron atomic mobilities on temperature and Cr content in the simulated alloys. The quantitative diffusion coefficients obtained with these potentials also differ significantly.
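The mean-squared-displacement analysis used in the paper is standard: in 3D, $\mathrm{MSD}(t) \approx 6Dt$ (the Einstein relation), so the diffusion coefficient is the slope of the MSD curve divided by 6. A minimal numpy sketch; the synthetic random-walk trajectory below is illustrative, not MD data:

```python
import numpy as np

def msd(traj):
    """Mean squared displacement vs. lag time.
    traj: array (n_frames, n_atoms, 3) of unwrapped coordinates."""
    disp = traj - traj[0]                      # displacement from the start
    return (disp ** 2).sum(axis=2).mean(axis=1)

rng = np.random.default_rng(0)
dt, D = 1e-3, 0.5                              # time step, true diffusivity
# Brownian trajectories: per-coordinate step variance is 2*D*dt.
steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(2000, 200, 3))
traj = np.cumsum(steps, axis=0)
t = np.arange(2000) * dt

# Slope of MSD(t) is ~6*D, so the estimated diffusivity is slope / 6.
D_est = np.polyfit(t, msd(traj), 1)[0] / 6.0
```

In practice one would average over time origins and fit only the linear portion of the curve; this sketch uses a single origin for brevity.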
-
Searching stochastic equilibria in transport networks by universal primal-dual gradient method
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 335-345
We consider one of the problems of transport modeling: finding the equilibrium distribution of traffic flows in a network. We use the classic Beckmann model to describe time costs and flow distribution in a network represented by a directed graph. Agents' behavior is not completely rational, which is described by introducing Markov logit dynamics: each driver selects a route randomly according to the Gibbs distribution, taking into account the current time costs on the edges of the graph. The problem thus reduces to finding the stationary distribution of this dynamics, which is a stochastic Nash–Wardrop equilibrium of the corresponding population congestion game on the transport network. Since the game is potential, this problem is equivalent to minimizing a certain functional over flow distributions; the stochasticity appears as an entropy regularization absent in the non-stochastic case. A dual problem is constructed to obtain a solution of the optimization problem, and the universal primal-dual gradient method is applied to it. A major feature of this method is its adaptive adjustment to the local smoothness of the problem, which is particularly important when the objective function has a complex structure and an a priori smoothness bound cannot be obtained with acceptable accuracy. Such a situation occurs in the considered problem, since the properties of the function depend strongly on the transport graph, on which we impose no strong restrictions. The article describes the algorithm, including the numerical differentiation used to calculate the objective function value and gradient. In addition, the paper presents a theoretical estimate of the time complexity of the algorithm and the results of numerical experiments conducted on the network of a small American town.
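The logit route choice underlying the dynamics can be sketched as follows: route $i$ with current time cost $T_i$ is chosen with probability proportional to $\exp(-T_i/\gamma)$. A minimal illustration with hypothetical route costs; $\gamma$ plays the role of the temperature of the Gibbs distribution:

```python
import numpy as np

def logit_route_probabilities(costs, gamma):
    """Gibbs distribution over routes: p_i ∝ exp(-T_i / gamma),
    where T_i is the current time cost of route i."""
    costs = np.asarray(costs, dtype=float)
    z = np.exp(-(costs - costs.min()) / gamma)   # shift for stability
    return z / z.sum()

# Three hypothetical routes with time costs 10, 12, 15 minutes.
costs = [10.0, 12.0, 15.0]
p_cold = logit_route_probabilities(costs, gamma=0.5)   # nearly rational
p_hot = logit_route_probabilities(costs, gamma=50.0)   # nearly uniform
```

Small $\gamma$ recovers the deterministic best-route choice of the non-stochastic Beckmann setting; large $\gamma$ spreads drivers almost uniformly, which is exactly the entropy-regularization effect mentioned above.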
-
Machine learning interpretation of inter-well radiowave survey data
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 675-684
Traditional geological prospecting methods are becoming ineffective: the exploration depth of kimberlite bodies and ore deposits has increased significantly. The only direct exploration method is to drill a system of wells to depths that provide access to the enclosing rocks. Because drilling is expensive, the role of inter-well survey methods has grown: they make it possible to increase the mean well spacing without significantly increasing the probability of missing a kimberlite or ore body. The method of inter-well radio wave survey is effective for finding objects of high conductivity contrast. The physics of the method is based on the dependence of electromagnetic wave propagation on the conductivity of the propagation medium. The source and receiver of electromagnetic radiation are electric dipoles placed in adjacent wells. Since the distance between source and receiver is known, the absorption coefficient of the medium can be estimated from the rate at which the radio wave amplitude decreases; rocks of low electrical resistance correspond to high absorption of radio waves. Inter-well measurements thus allow the effective electrical resistance (or conductivity) of the rock to be estimated. Typically, the source and receiver are lowered into adjacent wells synchronously, and the electric field amplitude measured at the receiver gives the average attenuation coefficient along the line connecting the source and receiver. Measurements are taken during stops, approximately every 5 m. The distance between stops is much less than the distance between adjacent wells, which leads to significant spatial anisotropy in the distribution of the measured data. The drill grid covers a large area, and our aim is to build a three-dimensional model of the distribution of the electrical properties of the inter-well space over the whole area.
The anisotropy of the spatial distribution makes the standard geostatistical approach hard to apply. To build a three-dimensional model of the attenuation coefficient we used one of the methods of machine learning, the method of $k$ nearest neighbors, in which the value of the absorption coefficient at a given point is calculated from the $k$ nearest measurements; the number $k$ must be determined from additional considerations. The effect of spatial anisotropy can be reduced by changing the spatial scale in the horizontal direction; the scale factor $\lambda$ is another external parameter of the problem. The values of the parameters $k$ and $\lambda$ were selected using the coefficient of determination. To demonstrate the construction of a three-dimensional image of the absorption coefficient, we apply the procedure to inter-well radio wave survey data obtained at one of the sites in Yakutia.
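The nearest-neighbor estimate with a horizontal scale factor can be sketched as follows (a minimal illustration on synthetic data, not the survey data; the coordinates, field, and parameter values are hypothetical):

```python
import numpy as np

def knn_predict(points, values, query, k, lam):
    """k-nearest-neighbor estimate of a field value at `query`.
    Horizontal coordinates (x, y) are multiplied by the scale factor
    `lam` before computing distances, compensating the anisotropy of the
    measurement layout (dense along wells, sparse between them)."""
    scale = np.array([lam, lam, 1.0])          # stretch x, y; keep z
    d = np.linalg.norm((points - query) * scale, axis=1)
    nearest = np.argsort(d)[:k]
    return values[nearest].mean()

# Synthetic example: an attenuation field that grows with depth only.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(500, 3))       # (x, y, z) measurement points
vals = 0.01 * pts[:, 2]                        # field = 0.01 * depth
est = knn_predict(pts, vals, np.array([50.0, 50.0, 50.0]), k=5, lam=0.3)
```

In the paper both $k$ and $\lambda$ are tuned by maximizing the coefficient of determination on held-out measurements; the fixed values above are for illustration only.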
-
Optimal fishing and evolution of fish migration routes
Computer Research and Modeling, 2019, v. 11, no. 5, pp. 879-893
A new discrete ecological-evolutionary mathematical model is presented which implements search mechanisms for evolutionarily stable migration routes of fish populations. The proposed adaptive constructions have small dimension and therefore high speed, which makes it possible to carry out long-term calculations in acceptable machine time. Both geometric methods of nonlinear analysis and computer "asymptotic" methods were used in the stability study. The migration dynamics of the fish population is described by a certain Markov matrix, which can change during evolution. In the family of Markov matrices (of fixed dimension), "basis" matrices are selected and used to generate the migration routes of mutants. A promising direction of evolution of the spatial behavior of fish, for a given fishery and food supply, is revealed as a result of competition of the initial population with mutants. The model was applied to the problem of optimal catch over the long term, under the condition that the reservoir is divided into two parts, each with its own owner. The optimization problems are solved by dynamic programming, based on construction of the Bellman function. A paradoxical "luring" strategy was discovered, in which one of the participants in the fishery temporarily reduces the catch in its part of the water area; the migrating fish then spend more time in this area (given an equal food supply). This route becomes evolutionarily fixed and does not change even after fishing in the area resumes. The second participant can restore the status quo by applying "luring" to its own part of the water area, and an endless sequence of "lurings" arises as a kind of game of "giveaway". A new effective concept is introduced: the internal price of the fish population, which depends on the zone of the reservoir.
These prices are in fact partial derivatives of the Bellman function and can be used as a tax on caught fish, in which case the problem of long-term fishing reduces to solving a one-year optimization problem.
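The Markov description of migration mentioned above amounts to redistributing the population over zones by a row-stochastic matrix at each time step. A minimal sketch with a hypothetical two-zone matrix (the retention probabilities are illustrative, e.g. zone 0 retaining fish more strongly after a "luring" reduction of catch there):

```python
import numpy as np

def migrate(x, M, steps=1):
    """Propagate the population distribution x over zones with the
    row-stochastic migration matrix M, where M[i, j] is the probability
    of moving from zone i to zone j within one time step."""
    for _ in range(steps):
        x = x @ M
    return x

# Hypothetical two-zone reservoir.
M = np.array([[0.9, 0.1],
              [0.4, 0.6]])
x0 = np.array([0.5, 0.5])
x_stat = migrate(x0, M, steps=100)   # approaches the stationary distribution
# Stationary distribution of this M: (0.8, 0.2) — fish concentrate in zone 0.
```

Evolution in the model acts on M itself: mutants carry perturbed migration matrices, and routes whose stationary distributions yield higher fitness displace the original ones.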
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index