Most viewed papers

Articles found: 666
  1. Malkov S.Yu.
    World dynamics patterns modeling
    Computer Research and Modeling, 2017, v. 9, no. 3, pp. 419-432

    The article analyzes the historical process using the methods of synergetics (the science of nonlinear developing systems in nature and society) developed in the works of D. S. Chernavskii as applied to economic and social systems. It is shown that, depending on conditions, social self-organization leads to the formation of both societies with strong internal competition (Y-structures) and societies of the cooperative type (X-structures). Y-structures are characteristic of the countries of the West, X-structures of the countries of the East. It is shown that the 19th and 20th centuries saw the accelerated formation and strengthening of Y-structures. At present, however, the world system has entered a period of serious structural changes in the economic, political, and ideological spheres: the domination of Y-structures is coming to an end. The possible paths of further development of the world system are examined, connected with a change in the regimes of self-organization and the limitation of internal competition. This transition will be prolonged and complex. Under these conditions, the value of the civilizational experience of Russia, on the basis of which a combined-type social system was formed, will objectively grow. It is shown that a transition from the present domination of Y-structures to an entirely new global system is ultimately inevitable; its stability will be based on a new ideology, a new spirituality (i.e., new “conditional information” in the sense of D. S. Chernavskii), which turns from the principles of competition to the principles of collaboration.

    Views (last year): 17.
  2. Kulikov Y.M., Son E.E.
    CABARET scheme implementation for free shear layer modeling
    Computer Research and Modeling, 2017, v. 9, no. 6, pp. 881-903

    In the present paper we reexamine the properties of the CABARET numerical scheme formulated for weakly compressible fluid flow, based on the results of free shear layer modeling. The Kelvin–Helmholtz instability and the successive generation of two-dimensional turbulence provide a wide field for scheme analysis, including the temporal evolution of the integral energy and enstrophy curves, the vorticity patterns and energy spectra, as well as the dispersion relation for the instability increment. Most of the calculations are performed at Reynolds number $\text{Re} = 4 \times 10^5$ on square grids sequentially refined in the range of $128^2$–$2048^2$ nodes. Attention is paid to the problem of underresolved layers generating a spurious vortex during the roll-up of the vorticity layers. This phenomenon takes place only on the coarse grid with $128^2$ nodes, while a fully regularized evolution pattern of vorticity appears only when approaching the $1024^2$-node grid. We also discuss the vorticity resolution properties of the grids used with respect to dimensional estimates for the eddies at the borders of the inertial interval, showing that the available range of grids appears to be sufficient for good resolution of small-scale vorticity patches. Nevertheless, we claim convergence only for the domains occupied by large-scale structures.

    The evolution of the generated turbulence is consistent with theoretical concepts predicting the emergence of large vortices, which collect all the kinetic energy of motion, and of solitary small-scale eddies. The latter resemble coherent structures surviving the filamentation process and almost not interacting with other scales. The dissipative characteristics of the numerical method employed are discussed in terms of the kinetic energy dissipation rate, calculated both directly and from theoretical laws for incompressible (via enstrophy curves) and compressible (via the strain rate tensor and dilatation) fluid models. The asymptotic behavior of the kinetic energy and enstrophy cascades complies with the two-dimensional turbulence laws $E(k) \propto k^{-3}$, $\omega^2(k) \propto k^{-1}$. The instability increment considered as a function of dimensionless wave number shows good agreement with other papers; however, the commonly used method of calculating the instability growth rate is not always accurate, so a modification is proposed. Thus, the implemented CABARET scheme, possessing remarkably small numerical dissipation and good vorticity resolution, is quite a competitive approach compared to other high-order accuracy methods.

    Views (last year): 17.
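
    As an illustration of the spectral diagnostics mentioned in this abstract, the following minimal Python sketch (not code from the paper; grid size, box length and field names are assumptions) computes the isotropic kinetic energy spectrum E(k) of a doubly periodic 2D velocity field, whose inertial-range slope can then be compared with the k^-3 law.

        import numpy as np

        def energy_spectrum(u, v, L=2 * np.pi):
            """Isotropic kinetic energy spectrum E(k) of a 2D periodic velocity field."""
            n = u.shape[0]
            uh = np.fft.fft2(u) / n**2                    # normalized Fourier coefficients
            vh = np.fft.fft2(v) / n**2
            e2d = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2)   # spectral kinetic energy density
            k1d = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # physical wavenumbers
            kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
            kmag = np.sqrt(kx**2 + ky**2)
            kbins = np.arange(0.5, n // 2, 1.0)           # circular shells in wavenumber space
            E = np.zeros(len(kbins) - 1)
            for i in range(len(kbins) - 1):
                shell = (kmag >= kbins[i]) & (kmag < kbins[i + 1])
                E[i] = e2d[shell].sum()
            kc = 0.5 * (kbins[:-1] + kbins[1:])
            return kc, E

        # Fitting log E(k) against log k over an assumed inertial range should give a
        # slope close to -3 for the two-dimensional turbulence discussed in the abstract.
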
  3. Minkevich I.G.
    The effect of cell metabolism on biomass yield during the growth on various substrates
    Computer Research and Modeling, 2017, v. 9, no. 6, pp. 993-1014

    Bioenergetic regularities determining the maximal biomass yield in aerobic microbial growth on various substrates have been considered. The approach is based on the method of mass-energy balance and the application of the GenMetPath computer program package. An equation system has been formulated describing the balances of 1) metabolite reductivity and 2) high-energy bonds formed and expended. To formulate the system, the whole metabolism is subdivided into constructive and energetic partial metabolisms. The constructive metabolism is, in turn, subdivided into two parts: forward and standard. The latter subdivision is based on the choice of nodal metabolites. The forward constructive metabolism depends substantially on the growth substrate: it converts the substrate into the standard set of nodal metabolites. The latter is then converted into biomass macromolecules by the standard constructive metabolism, which is the same on various substrates. Variations of the flows via nodal metabolites are shown to exert minor effects on the standard constructive metabolism. As a separate case, growth on substrates requiring the participation of oxygenases and/or oxidase is considered. The bioenergetic characteristics of the standard constructive metabolism are found from a large amount of data for the growth of various organisms on glucose. The described approach can be used for predicting the biomass growth yield on substrates with known reactions of their primary metabolization. As an example, the growth of a yeast culture on ethanol has been considered. The value of the maximal growth yield predicted by the method described here showed very good consistency with the value found experimentally.

    Views (last year): 17.
  4. The paper presents a physico-mathematical model, obtained as a result of comprehensive theoretical studies, of the perturbed region formed in the lower D-layer of the ionosphere under the action of a directed radio emission flux of the megahertz frequency range from a ground-based facility. The model is based on the consideration of a wide range of kinetic processes, taking into account their nonequilibrium nature, and on a two-temperature approximation for describing the transformation of the radio beam energy absorbed by electrons. The initial data on radio emission are taken at the levels achieved by the most powerful radio-heating facilities. Their basic characteristics and principles of operation, as well as the features of the altitude distribution of the absorbed electromagnetic energy of the radio beam, are briefly described. The paper demonstrates the decisive role of the D-layer of the ionosphere in the absorption of the radio beam energy. On the basis of theoretical analysis, analytical expressions are obtained for the contribution of various inelastic processes to the distribution of the absorbed energy, which makes it possible to describe correctly the contribution of each of the processes considered. The model includes more than 60 components, whose concentration changes are described by about 160 reactions. All the reactions are divided into five groups according to their physical content: an ionization-chemical block, a block of excitation of metastable electronic states, a cluster block, a block of excitation of vibrational states, and a block of impurities. The blocks are interrelated and can be calculated both jointly and separately. The paper shows that the behavior of the parameters of the perturbed region differs significantly between daytime and nighttime conditions at the same radio flux density: under daytime conditions the maxima of the electron concentration and temperature are at an altitude of ~45–55 km; under nighttime conditions, at ~80 km, with the temperature of heavy particles rapidly increasing, which leads to the occurrence of a gas-dynamic flow. Therefore, a special numerical algorithm is developed to solve two basic problems: kinetic and gas-dynamic. Based on the altitude and temporal behavior of the concentrations and temperatures, the algorithm makes it possible to determine the ionization and emission of the ionosphere in the visible and infrared spectral ranges, which in turn makes it possible to evaluate the influence of the perturbed region on radio engineering and optoelectronic devices used in space technology.

    Views (last year): 17.
  5. Kalmykov L.V., Kalmykov V.L.
    Investigation of individual-based mechanisms of single-species population dynamics by logical deterministic cellular automata
    Computer Research and Modeling, 2015, v. 7, no. 6, pp. 1279-1293

    Investigation of logical deterministic cellular automata models of population dynamics makes it possible to reveal detailed individual-based mechanisms. The search for such mechanisms is important in connection with ecological problems caused by overexploitation of natural resources, environmental pollution and climate change. Classical models of population dynamics are phenomenological in nature, as they are “black boxes”. Phenomenological models fundamentally complicate the study of detailed mechanisms of ecosystem functioning. We have investigated the role of fecundity and of the duration of resource regeneration in the mechanisms of population growth, using four models of an ecosystem with one species. These models are logical deterministic cellular automata based on the physical axiomatics of an excitable medium with regeneration. We have modeled the catastrophic death of a population arising from an increase in the duration of resource regeneration. It has been shown that greater fecundity accelerates population extinction. The investigated mechanisms are important for understanding the sustainability of ecosystems and the conservation of biodiversity. Prospects of the presented approach as a method of transparent multilevel modeling of complex systems are discussed.

    Views (last year): 16. Citations: 3 (RSCI).
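
    A purely illustrative Python sketch of a logical deterministic cellular automaton of this kind is given below; the update rules, the fecundity value and the regeneration duration are assumptions chosen for illustration and are not the authors' exact model.

        import numpy as np

        FREE, OCCUPIED = 0, 1      # states 2 .. 1 + REGEN encode regeneration stages
        REGEN = 3                  # duration of resource regeneration (assumed)
        FECUNDITY = 2              # max offspring per individual per step (assumed)

        def step(grid):
            """One deterministic update of the single-species lattice (periodic borders)."""
            new = grid.copy()
            occupied = list(zip(*np.where(grid == OCCUPIED)))
            # each individual dies and leaves its cell regenerating
            for i, j in occupied:
                new[i, j] = 2
            # regeneration countdown: a cell becomes free again after REGEN steps
            regen = grid >= 2
            advance = regen & (grid < 1 + REGEN)
            new[advance] = grid[advance] + 1
            new[regen & (grid == 1 + REGEN)] = FREE
            # reproduction into free von Neumann neighbours, limited by fecundity
            n = grid.shape[0]
            for i, j in occupied:
                born = 0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = (i + di) % n, (j + dj) % n
                    if born < FECUNDITY and grid[ni, nj] == FREE and new[ni, nj] == FREE:
                        new[ni, nj] = OCCUPIED
                        born += 1
            return new
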
  6. Golov A.V., Simakov S.S.
    Mathematical model of respiratory regulation during hypoxia and hypercapnia
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 297-310

    Transport of respiratory gases by the respiratory and circulatory systems is one of the most important processes associated with the living conditions of the human body. Significant and/or long-term deviations of the oxygen and carbon dioxide concentrations in blood from their normal values can cause significant pathological changes with irreversible consequences: lack of oxygen (hypoxia and ischemic events), a change in the acid-base balance of blood (acidosis or alkalosis), and others. In the context of a changing external environment and changing internal conditions of the body, its regulatory systems act to maintain homeostasis. One of the major mechanisms for maintaining the concentrations (partial pressures) of oxygen and carbon dioxide in the blood at a normal level is the regulation of minute ventilation, respiratory rate and depth of respiration, which is driven by the activity of the central and peripheral regulators.

    In this paper we propose a mathematical model of the regulation of pulmonary ventilation. The model is used to calculate the adaptation of minute ventilation during hypoxia and hypercapnia. It is built on a single-component model of the lungs and on the biochemical equilibrium conditions of oxygen and carbon dioxide in the blood and in the alveolar lung volume. A comparison with laboratory data on hypoxia and hypercapnia is performed. Analysis of the results shows that the model reproduces the dynamics of minute ventilation during hypercapnia with sufficient accuracy. Another conclusion is that a more accurate model of the regulation of minute ventilation during hypoxia should be developed. The factors preventing satisfactory accuracy are analyzed in the final section.

    Respiratory function is one of the main limiting factors of the organism during intense physical activity, and thus an important characteristic in high performance sport and under extreme physical activity conditions. Therefore, the results of this study have significant applied value in the field of mathematical modeling in sport. The considered conditions of hypoxia and hypercapnia partly reproduce training at high altitude and under hypoxic conditions, the purpose of which is to increase the level of hemoglobin in the blood of highly qualified athletes. These are the only such conditions admitted by sports committees.

    Views (last year): 16.
  7. Malinetsky G.G.
    Youth. Eternity. Synergetics
    Computer Research and Modeling, 2017, v. 9, no. 3, pp. 361-378
    Views (last year): 16. Citations: 1 (RSCI).
  8. In memory of A. S. Kholodov
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 677-678
    Views (last year): 16.
  9. Shabanov A.E., Petrov M.N., Chikitkin A.V.
    A multilayer neural network for determination of particle size distribution in Dynamic Light Scattering problem
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 265-273

    Solving the Dynamic Light Scattering problem makes it possible to determine the particle size distribution (PSD) from the spectrum of the intensity of scattered light. As a result of an experiment, an intensity curve is obtained. The experimentally obtained spectrum of intensity is compared with the theoretically expected spectrum, which is a Lorentzian line. The main task is to determine, on the basis of these data, the relative concentrations of particles of each class present in the solution. The article presents a method for constructing and using a neural network trained on synthetic data to determine the PSD in a solution in the range of 1–500 nm. The neural network consists of a fully connected layer of 60 neurons with the ReLU activation function, a layer of 45 neurons with the same activation function, a dropout layer, and two layers with 15 and 1 neurons (the network output). The article describes how the network has been trained and tested on synthetic and experimental data. On the synthetic data, the root-mean-square error (RMSE) metric gave a value of 1.3157 nm. Experimental data were obtained for particle sizes of 200 nm, 400 nm, and for a solution containing representatives of both sizes. The results of the neural network and of the classical linear methods are compared. The disadvantage of the classical methods is that it is difficult to choose the degree of regularization: too strong regularization leads to over-smoothed particle size distribution curves, while weak regularization gives oscillating curves and low reliability of the results. The paper shows that the neural network gives a good prediction for particles of large size. For small sizes the prediction is worse, but the error quickly decreases as the particle size increases.

    Views (last year): 16.
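
    A minimal PyTorch sketch of a network with the layer sizes quoted in the abstract (60 -> 45 -> dropout -> 15 -> 1, ReLU activations) is shown below; the input dimension, the dropout rate and the training setup are assumptions, since they are not given in the abstract.

        import torch
        import torch.nn as nn

        INPUT_DIM = 32   # hypothetical length of the measured intensity spectrum

        # Fully connected architecture as quoted: 60 -> 45 -> dropout -> 15 -> 1
        model = nn.Sequential(
            nn.Linear(INPUT_DIM, 60), nn.ReLU(),
            nn.Linear(60, 45), nn.ReLU(),
            nn.Dropout(p=0.2),                 # dropout rate is an assumption
            nn.Linear(45, 15), nn.ReLU(),
            nn.Linear(15, 1),                  # network output
        )

        # Training on synthetic (spectrum, size) pairs would minimize a mean squared
        # error, whose square root corresponds to the RMSE metric quoted above.
        loss_fn = nn.MSELoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
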
  10. Koganov A.V., Rakcheeva T.A., Prikhodko D.I.
    Experimental identification of the organization of mental calculations of the person on the basis of algebras of different associativity
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 311-327

    The work continues research on the ability of a person to improve the productivity of information processing by working in parallel or by improving the performance of the analyzers. A person receives a series of tasks whose solution requires the processing of a certain amount of information. The time and the validity of the decision are recorded. The dependence of the average solution time on the amount of information in the task is determined from the correctly solved tasks. In accordance with the proposed method, the tasks contain computations of expressions in two algebras, one of which is associative and the other nonassociative. To facilitate the work of the subjects, figurative graphic images of the algebra elements were used in the experiment. Nonassociative computations were implemented in the form of the game “rock-paper-scissors”: it was necessary to determine the winning symbol in a long line of these figures, considering that they appear sequentially from left to right and play against the previous winning symbol. Associative computations were based on the recognition of drawings from a finite set of simple images: it was necessary to determine which figure from this set was missing from the line, or to state that all the pictures were present; in each task no more than one picture was missing. Computation in an associative algebra allows parallel counting, while in the absence of associativity only sequential computation is possible. Therefore, the analysis of the time needed to solve a series of tasks reveals whether the strategy was uniform sequential, accelerated sequential, or parallel. In the experiments it was found that all subjects used a uniform sequential strategy to solve the nonassociative tasks. For the associative task, all subjects used parallel computation, and some used parallel computation with acceleration as the complexity of the task grew. A small proportion of the subjects, at high complexity, supplemented the parallel computation with a sequential stage of calculations (possibly to check the solution), judging by the evolution of the solution time. We developed a special method for assessing the rate at which a person processes input information, which allowed us to estimate the level of parallelism of the computation in the associative task. A parallelism level of two to three was registered. The characteristic speed of information processing in the sequential case (about one and a half characters per second) is half the typical speed of human image recognition; apparently, the difference in time is actually spent on the computation process itself. For the associative task with a minimal amount of information, the solution time is close to that of the nonassociative case, or smaller by less than a factor of two. This is probably because, for a small number of characters, recognition nearly exhausts the computation required by the nonassociative task used.

    Views (last year): 16.
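
    The contrast between the two task types can be illustrated by the following Python sketch (not the authors' code; symbol names and the tie rule are assumptions): the rock-paper-scissors winner of a line must be folded strictly left to right because the operation is not associative, whereas finding the missing picture is order-independent and therefore parallelizable.

        # Non-associative task: each new symbol plays against the previous winner.
        BEATS = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}

        def rps(prev_winner, challenger):
            if prev_winner == challenger:
                return prev_winner                     # tie: winner keeps the title (assumed rule)
            return prev_winner if (prev_winner, challenger) in BEATS else challenger

        def line_winner(symbols):
            winner = symbols[0]
            for s in symbols[1:]:                      # strictly sequential left-to-right fold
                winner = rps(winner, s)
            return winner

        # Associative task: which picture of a fixed set is missing from the line.
        # Set union is associative, so partial results can be combined in any order
        # (i.e. computed in parallel) without changing the answer.
        def missing_picture(line, full_set):
            absent = set(full_set) - set(line)
            return absent.pop() if absent else None

        print(line_winner(["rock", "paper", "paper", "scissors"]))       # -> scissors
        print(missing_picture(["sun", "tree"], {"sun", "tree", "cat"}))  # -> cat
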
