Modeling of H2-permeability of alloys for gas separation membranes
Computer Research and Modeling, 2016, v. 8, no. 1, pp. 121-135
High-purity hydrogen is required for clean energy and a variety of chemical technology processes. A considerable part of hydrogen is to be obtained by methane conversion. Different alloys, which may be well suited for use in gas-separation plants, were investigated by measuring their specific hydrogen permeability. The parameters of diffusion and sorption had to be estimated in order to numerically model different scenarios and experimental conditions of material usage (including extreme ones) and to identify the limiting factors. This paper presents a nonlinear model of hydrogen permeability in accordance with the specifics of the experiment, a numerical method for solving the boundary-value problem, and the results of parametric identification for the alloy V85Ni15.
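The kind of boundary-value problem behind such permeability models can be illustrated with a dimensionless finite-difference sketch: 1D diffusion through the membrane bulk with quadratic (second-order desorption) surface kinetics on both faces. All parameter values below are illustrative placeholders, not the fitted constants for V85Ni15.

```python
import numpy as np

def permeation_fluxes(D=1.0, b=0.5, mu=1.0, L=1.0, n=21, t_end=10.0):
    """Explicit finite-difference sketch of hydrogen permeation.

    Bulk:   c_t = D c_xx on (0, L)
    Inlet:  incident flux mu minus quadratic desorption b*c(0)^2
    Outlet: quadratic desorption b*c(L)^2 only
    Dimensionless placeholder parameters, not fitted V85Ni15 values.
    """
    h = L / (n - 1)
    dt = 0.4 * h * h / D                 # explicit-scheme stability limit
    c = np.zeros(n)
    for _ in range(int(t_end / dt)):
        lap = (c[:-2] - 2 * c[1:-1] + c[2:]) / h**2
        # surface balances: kinetic flux vs. diffusive flux into the bulk
        j_in = mu - b * c[0]**2 - D * (c[0] - c[1]) / h
        j_out = D * (c[-2] - c[-1]) / h - b * c[-1]**2
        c[1:-1] += dt * D * lap
        c[0] += dt * j_in / (h / 2)      # half-cell mass balance at x = 0
        c[-1] += dt * j_out / (h / 2)    # half-cell mass balance at x = L
    return mu - b * c[0]**2, b * c[-1]**2  # inlet and outlet fluxes

j_inlet, j_outlet = permeation_fluxes()
```

At steady state the two fluxes coincide; parametric identification would then amount to fitting D and b so that the computed outlet flux matches the measured permeation curve.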
-
Numerical investigations of mixing non-isothermal streams of sodium coolant in T-branch
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 95-110
Numerical investigation of the mixing of non-isothermal streams of sodium coolant in a T-branch is carried out in the FlowVision CFD software. This study is aimed at substantiating the applicability of different approaches to predicting the oscillating behavior of the flow in the mixing zone and simulating temperature pulsations. The following approaches are considered: URANS (Unsteady Reynolds-Averaged Navier–Stokes), LES (Large Eddy Simulation) and quasi-DNS (Direct Numerical Simulation). One of the main tasks of the work is identifying the advantages and drawbacks of these approaches.
Numerical investigation of the temperature pulsations arising in the liquid and in the T-branch walls from the mixing of non-isothermal streams of sodium coolant was carried out within a mathematical model assuming that the flow is turbulent, the fluid density does not depend on pressure, and heat exchange proceeds between the coolant and the T-branch walls. The LMS model, designed for modeling turbulent heat transfer, was used in the calculations within the URANS approach. The model allows calculation of the Prandtl number distribution over the computational domain.
A preliminary study was dedicated to estimating the influence of the computational grid on the development of the oscillating flow and the character of the temperature pulsations within the aforementioned approaches. The study resulted in the formulation of grid-generation criteria for each approach.
Then, calculations of three flow regimes have been carried out. The regimes differ by the ratios of the sodium mass flow rates and temperatures at the T-branch inlets. Each regime was calculated with use of the URANS, LES and quasi-DNS approaches.
At the final stage of the work analytical comparison of numerical and experimental data was performed. Advantages and drawbacks of each approach to simulation of mixing non-isothermal streams of sodium coolant in the T-branch are revealed and formulated.
It is shown that the URANS approach predicts the mean temperature distribution with a reasonable accuracy. It requires essentially less computational and time resources compared to the LES and DNS approaches. The drawback of this approach is that it does not reproduce pulsations of velocity, pressure and temperature.
The LES and DNS approaches also predict the mean temperature with a reasonable accuracy, and they provide oscillating solutions. The obtained amplitudes of the temperature pulsations exceed the experimental ones. The spectral power densities at the check points inside the sodium flow agree well with the experimental data. However, in the performed numerical experiments the computational and time expenses essentially exceed those of the URANS approach: by a factor of 350 for LES and 1500 for quasi-DNS.
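The spectral comparison above amounts to estimating the power spectral density of the temperature record at each check point. A minimal numpy-only sketch (a plain periodogram of a synthetic record; the 5 Hz pulsation frequency, sampling step and noise level are invented for illustration):

```python
import numpy as np

def psd(signal, dt):
    """One-sided power spectral density via a plain periodogram."""
    n = len(signal)
    spec = np.fft.rfft(signal - signal.mean())   # remove the mean (DC) first
    freqs = np.fft.rfftfreq(n, dt)
    power = (np.abs(spec) ** 2) * 2 * dt / n     # one-sided normalization
    return freqs, power

# synthetic "check point" record: a 5 Hz pulsation buried in noise
rng = np.random.default_rng(0)
dt = 1e-2                                        # 100 Hz sampling
t = np.arange(0, 20, dt)
temp = 0.5 * np.sin(2 * np.pi * 5.0 * t) + 0.2 * rng.standard_normal(t.size)
f, p = psd(temp, dt)
f_peak = f[np.argmax(p)]                         # dominant pulsation frequency
```

In practice one would average periodograms over segments (Welch's method) to reduce the variance before comparing against the experimental spectra.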
-
Analysis of human respiratory reactions in a changed gas environment using a mathematical model
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 281-296
The aim of the work was to study and develop methods for forecasting the dynamics of human respiratory reactions based on mathematical modeling. To achieve this goal, the following tasks were set and solved: the overall structure and a formalized description of a model of the respiratory reflex system were developed and justified; an algorithm was built and implemented in a software model of the body's gas exchange; computational experiments were performed, and the adequacy of the model was checked against literature data and our own experimental studies.
In this version of the comprehensive model, a modified partial model of the physicochemical properties and acid-base balance of blood was included. In developing the model, the formalized description was based on the concept of separating the physiological regulation system into active and passive regulation subsystems. The model was developed in stages. The integrated gas exchange model consisted of the following partial models: a basic biophysical model of the gas exchange system; a model of the physicochemical properties and acid-base balance of blood; a model of the passive mechanisms of gas exchange developed on the basis of the mass balance equations of F. Grodins; and a model of chemical regulation developed on the basis of the multifactor model of D. Gray.
The model was implemented in the MatLab programming environment, and the equations were solved by the Runge–Kutta–Fehlberg method. It is assumed that the model will be presented as a computer research program that allows various hypotheses about the mechanisms of the observed processes to be tested. The expected values of the basic indicators of gas exchange under hypercapnia and hypoxia were calculated. The results of the calculations, both qualitatively and quantitatively, agree well enough with the data obtained in studies on test subjects. The adequacy check confirmed that the calculation error is within the error of medico-biological experiments. The model can be used for theoretical prediction of the dynamics of the respiratory reactions of the human body in a changed gas environment.
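The Runge–Kutta–Fehlberg method mentioned above pairs 4th- and 5th-order estimates of each step to control the step size automatically. A self-contained scalar sketch applied to a toy first-order washout equation (the model's actual right-hand sides are, of course, the coupled gas-exchange equations, not this placeholder):

```python
import math

# classic Fehlberg 4(5) tableau
A = [0, 1/4, 3/8, 12/13, 1, 1/2]
B = [[], [1/4], [3/32, 9/32], [1932/2197, -7200/2197, 7296/2197],
     [439/216, -8, 3680/513, -845/4104],
     [-8/27, 2, -3544/2565, 1859/4104, -11/40]]
C4 = [25/216, 0, 1408/2565, 2197/4104, -1/5, 0]
C5 = [16/135, 0, 6656/12825, 28561/56430, -9/50, 2/55]

def rkf45(f, t, y, t_end, h=0.1, tol=1e-8):
    """Adaptive integration of scalar y' = f(t, y) with the RKF4(5) pair."""
    while t < t_end:
        h = min(h, t_end - t)
        k = []
        for i in range(6):
            yi = y + h * sum(B[i][j] * k[j] for j in range(i))
            k.append(f(t + A[i] * h, yi))
        y4 = y + h * sum(c * ki for c, ki in zip(C4, k))
        y5 = y + h * sum(c * ki for c, ki in zip(C5, k))
        err = max(abs(y5 - y4), 1e-16)       # embedded error estimate
        if err <= tol:
            t, y = t + h, y5                 # accept the step
        h *= min(4.0, max(0.1, 0.84 * (tol / err) ** 0.25))
    return y

# toy washout equation dy/dt = -y, exact solution y(t) = e^{-t}
y_num = rkf45(lambda t, y: -y, 0.0, 1.0, 2.0)
```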
-
Signal and noise calculation at Rician data analysis by means of combining maximum likelihood technique and method of moments
Computer Research and Modeling, 2018, v. 10, no. 4, pp. 511-523
The paper develops a new mathematical method for the joint calculation of signal and noise in the Rice statistical distribution, based on combining the maximum likelihood method and the method of moments. The calculation of the sought-for values of signal and noise is implemented by processing sampled measurements of the analyzed Rician signal's amplitude. An explicit system of equations has been obtained for the required signal and noise parameters, and the results of its numerical solution are provided, confirming the efficiency of the proposed technique. It has been shown that solving the two-parameter task by means of the proposed technique does not increase the volume of the demanded computational resources compared with solving the task in the one-parameter approximation. An analytical solution of the task has been obtained for the particular case of a small signal-to-noise ratio. The paper investigates the dependence of the accuracy and dispersion of the sought-for parameter estimates on the number of measurements in the experimental sample. According to the results of numerical experiments, the dispersion values of the estimated signal and noise parameters calculated by means of the proposed technique change in inverse proportion to the number of measurements in a sample. A comparison has been implemented between the accuracy of the sought-for Rician parameters' estimation by means of the proposed technique and by the earlier developed version of the method of moments.
The problem considered in the paper is meaningful for the purposes of Rician data processing, in particular, in magnetic resonance visualization systems, in ultrasonic visualization devices, in the analysis of optical signals in range-measuring systems, in radar signal analysis, as well as in solving many other scientific and applied tasks that are adequately described by the Rice statistical model.
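The moment relations used by such estimators are easy to check numerically. The sketch below (with hypothetical parameter values) recovers the signal nu and noise sigma from the second and fourth sample moments, using E[r^2] = nu^2 + 2*sigma^2 and E[r^4] = nu^4 + 8*sigma^2*nu^2 + 8*sigma^4, which give nu^4 = 2*(E[r^2])^2 - E[r^4]. This is the method-of-moments side only, not the combined ML technique of the paper.

```python
import numpy as np

def rician_moments(r):
    """Method-of-moments estimate of the Rician parameters (nu, sigma)."""
    m2 = np.mean(r ** 2)
    m4 = np.mean(r ** 4)
    nu_sq = max(2 * m2 ** 2 - m4, 0.0) ** 0.5   # nu^2 = sqrt(2 m2^2 - m4)
    sigma = max((m2 - nu_sq) / 2, 0.0) ** 0.5   # sigma^2 = (m2 - nu^2)/2
    return nu_sq ** 0.5, sigma

# synthetic Rician sample: magnitude of a complex Gaussian with mean nu
rng = np.random.default_rng(1)
nu_true, sigma_true, n = 3.0, 1.0, 200_000
r = np.abs(nu_true + sigma_true * (rng.standard_normal(n)
                                   + 1j * rng.standard_normal(n)))
nu_hat, sigma_hat = rician_moments(r)
```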
-
Numerical simulation of two-dimensional magnetic skyrmion structures
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1051-1061
Magnetic systems in which magnetic vortex structures (skyrmions) appear due to competition between the direct Heisenberg exchange and the Dzyaloshinskii–Moriya interaction were studied using the Metropolis algorithm.
The article considers the conditions for the nucleation and stable existence of magnetic skyrmions in two-dimensional magnetic films in the frame of the classical Heisenberg model. The thermal stability of skyrmions in a magnetic film was studied. The processes of formation of various states in the system at different values of the external magnetic field were considered, and the various phases into which the Heisenberg spin system passes were recognized. The authors identified seven phases: paramagnetic, spiral, labyrinth, spiral-skyrmion, skyrmion, skyrmion-ferromagnetic and ferromagnetic; a detailed analysis of the configurations is given in the article.
Two phase diagrams were plotted: the first shows the behavior of the system at a constant $D$ depending on the values of the external magnetic field and temperature, $(T, B)$; the second shows the change of the system configurations at a constant temperature $T$ depending on the magnitude of the Dzyaloshinskii–Moriya interaction and the external magnetic field, $(D, B)$.
The data from these numerical experiments will be used in further studies to determine the model parameters of the system for the formation of a stable skyrmion state and to develop methods for controlling skyrmions in a magnetic film.
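A minimal sketch of the Metropolis machinery for such a system is given below: a small periodic lattice of classical unit spins with exchange, a Dzyaloshinskii–Moriya term and a Zeeman field. The J, D, B values and the particular DMI form are illustrative, and the full energy is recomputed per trial move for clarity (production code would use the local energy change).

```python
import numpy as np

def energy(S, J=1.0, D=1.0, B=0.2):
    """Energy of a periodic 2D lattice of classical unit spins S[i, j, :]:
    Heisenberg exchange, an interfacial-type DMI term, and a Zeeman field."""
    Sx, Sy = np.roll(S, -1, axis=0), np.roll(S, -1, axis=1)
    exch = -J * (np.sum(S * Sx) + np.sum(S * Sy))
    dmi = D * (np.sum(np.cross(S, Sx)[..., 1]) - np.sum(np.cross(S, Sy)[..., 0]))
    return exch + dmi - B * np.sum(S[..., 2])

def sweep(S, T, rng):
    """One Metropolis sweep: L*L single-spin rotation attempts."""
    L = S.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        old = S[i, j].copy()
        e0 = energy(S)
        trial = old + 0.4 * rng.standard_normal(3)   # small random rotation
        S[i, j] = trial / np.linalg.norm(trial)      # keep |S| = 1
        dE = energy(S) - e0
        if dE > 0 and rng.random() >= np.exp(-dE / T):
            S[i, j] = old                            # reject the move

rng = np.random.default_rng(0)
S = rng.standard_normal((8, 8, 3))
S /= np.linalg.norm(S, axis=2, keepdims=True)        # random initial spins
e_start = energy(S)
for _ in range(30):
    sweep(S, 0.1, rng)                               # anneal at low T
e_end = energy(S)
```

Phase diagrams like those in the paper are obtained by repeating such runs over a grid of (T, B) or (D, B) values and classifying the resulting configurations.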
-
Models of population process with delay and the scenario for adaptive resistance to invasion
Computer Research and Modeling, 2022, v. 14, no. 1, pp. 147-161
Changes in abundance for emerging populations can develop according to several dynamic scenarios. After rapid biological invasions, the time factor in the development of a reaction from the biotic environment becomes important. Two classic experiments with different endings of the confrontation of biological species are known. In Gause's experiments with ciliates, the introduced predator, after brief oscillations, completely destroyed its resource, so its $r$-parameter became excessive for the new conditions. Its reproductive activity was not regulated by additional factors and, as a result, became critical for the invader. In the experiments of the entomologist Uchida with parasitic wasps and their prey beetles, all species coexisted. In a situation where a population with a high reproductive potential is regulated by several natural enemies, interesting dynamic effects can occur, as observed in phytophages in an evergreen forest in Australia. The competing parasitic hymenoptera create a delayed regulation system for rapidly multiplying psyllid pests, in which a rapid increase in the psyllid population is allowed until the pest reaches its maximum number. The short maximum is followed by a rapid decline in numbers, but the minimization does not become critical for the population. The paper proposes a phenomenological model based on a differential equation with delay, which describes a scenario of adaptive regulation for a population with a high reproductive potential under an active but delayed reaction with threshold regulation of exposure. It is shown that complicating the regulation function of biotic resistance in the model stabilizes the dynamics after the rapidly breeding species passes its minimum number.
For a flexible system, transitional regimes of growth and crisis lead to the search for a new equilibrium in the evolutionary confrontation.
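The qualitative mechanism (delayed regulation of a fast-breeding population) is already visible in the classical Hutchinson delayed logistic equation, a much simpler relative of the threshold-regulated model proposed in the paper. A sketch with illustrative parameters in the stable regime r*tau < pi/2:

```python
import numpy as np

def hutchinson(r=0.5, tau=1.0, K=1.0, N0=0.1, dt=0.01, t_end=200.0):
    """Euler integration of the Hutchinson delayed logistic equation
    dN/dt = r N(t) (1 - N(t - tau)/K): the regulating feedback acts
    with a lag tau.  All parameter values are illustrative."""
    lag = int(tau / dt)
    N = [N0] * (lag + 1)                 # constant history on [-tau, 0]
    for _ in range(int(t_end / dt)):
        N.append(N[-1] + dt * r * N[-1] * (1 - N[-1 - lag] / K))
    return np.array(N)

traj = hutchinson()
```

For r*tau above pi/2 the same equation produces sustained oscillations, which is why a richer regulation function is needed to stabilize the dynamics, as the paper discusses.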
-
Experimental comparison of PageRank vector calculation algorithms
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 369-379
Finding the PageRank vector is of great scientific and practical interest due to its applicability in modern search engines. Although this problem reduces to finding the eigenvector of a stochastic matrix $P$, the need for new algorithms is justified by the large size of the input data. To achieve no more than linear execution time, various randomized methods have been proposed that return the expected result only with some probability close enough to one. We consider two of them, reducing the problem of calculating the PageRank vector to the problem of finding equilibrium in an antagonistic matrix game, which is then solved using the Grigoriadis–Khachiyan algorithm. This implementation works effectively under the assumption that the input matrix is sparse. As far as we know, there are no successful implementations of either the Grigoriadis–Khachiyan algorithm or its application to the task of calculating the PageRank vector. The purpose of this paper is to fill this gap. The article describes the algorithm, giving pseudocode and some details of the implementation. In addition, it discusses another randomized method of calculating the PageRank vector, namely Markov chain Monte Carlo (MCMC), in order to compare the results of these algorithms on matrices with different values of the spectral gap. The latter is of particular interest, since the magnitude of the spectral gap strongly affects the convergence rate of MCMC and does not affect the other two approaches at all. The comparison was carried out on two types of generated graphs: chains and $d$-dimensional cubes. The experiments, as predicted by the theory, demonstrated the effectiveness of the Grigoriadis–Khachiyan algorithm in comparison with MCMC for sparse graphs with a small spectral gap.
The written code is publicly available, so everyone can reproduce the results or use this implementation for their own needs. The work has a purely practical orientation; no theoretical results were obtained.
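As a toy illustration of the MCMC side of such a comparison (not the authors' released implementation, and not the Grigoriadis–Khachiyan solver): visit frequencies of a restarting random walk against plain power iteration, on a directed 4-node cycle where the PageRank vector is uniform by symmetry.

```python
import numpy as np

def pagerank_power(P, alpha=0.85, iters=200):
    """Reference PageRank by power iteration; P is row-stochastic."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        pi = alpha * (pi @ P) + (1 - alpha) / n
    return pi / pi.sum()

def pagerank_mcmc(P, alpha=0.85, steps=50_000, seed=2):
    """MCMC estimate: a walk that follows P w.p. alpha and teleports to a
    uniformly random node otherwise; visit frequencies approximate PageRank."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    counts = np.zeros(n)
    state = 0
    for _ in range(steps):
        counts[state] += 1
        if rng.random() < alpha:
            state = rng.choice(n, p=P[state])
        else:
            state = rng.integers(n)
    return counts / counts.sum()

P = np.zeros((4, 4))                     # directed 4-cycle: i -> (i+1) mod 4
for i in range(4):
    P[i, (i + 1) % 4] = 1.0
pi_power = pagerank_power(P)
pi_mcmc = pagerank_mcmc(P)
```

On long chains, by contrast, the spectral gap is small and the walk mixes slowly, which is exactly the regime where the paper reports MCMC losing to the game-theoretic approach.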
-
Reducing miss rate in a non-inclusive cache with inclusive directory of a chip multiprocessor
Computer Research and Modeling, 2023, v. 15, no. 3, pp. 639-656
Although the era of exponential performance growth in computer chips has ended, processor core counts have reached 16 or more even in general-purpose desktop CPUs. As DRAM throughput is unable to keep pace with this growth in computing power, CPU designers need to find ways of lowering memory traffic per instruction. The straightforward way to do this is to reduce the miss rate of the last-level cache. Assuming the “non-inclusive cache, inclusive directory” (NCID) scheme is already implemented, three ways of further reducing the cache miss rate were studied.
The first is to achieve more uniform usage of cache banks and sets by employing hash-based interleaving and indexing. In experiments on the SPEC CPU2017 refrate tests, even the simplest XOR-based hash functions demonstrated performance increases of 3.2%, 9.1%, and 8.2% for CPU configurations with 16, 32, and 64 cores and last-level cache banks, comparable to the results of more complex matrix-, division- and CRC-based functions.
The second optimisation is aimed at reducing replication at different cache levels by means of automatically switching to the exclusive scheme when it appears optimal. A known scheme of this type, FLEXclusion, was modified for use in NCID caches and showed an average performance gain of 3.8%, 5.4%, and 7.9% for 16-, 32-, and 64-core configurations.
The third optimisation is to increase the effective cache capacity using compression. The compression rate of the inexpensive and fast BDI*-HL (Base-Delta-Immediate Modified, Half-Line) algorithm, designed for NCID, was measured, and the corresponding increase in cache capacity yielded roughly a 1% average performance increase.
All three optimisations can be combined and demonstrated a performance gain of 7.7%, 16% and 19% for CPU configurations with 16, 32, and 64 cores and banks, respectively.
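The idea behind XOR-based interleaving can be shown in a few lines: fold higher address-bit groups into the bank index so that power-of-two strides no longer collapse onto a single bank. The line size, bank count and hash below are illustrative simplifications, not the hash functions evaluated in the paper.

```python
LINE = 64          # cache line size in bytes (illustrative)
BANKS = 16         # number of last-level cache banks (illustrative)

def bank_plain(addr):
    """Conventional modulo interleaving over cache-line addresses."""
    return (addr // LINE) % BANKS

def bank_xor(addr):
    """XOR-fold all higher address-bit groups into the bank index."""
    bits = BANKS.bit_length() - 1      # 4 index bits for 16 banks
    s = addr // LINE
    idx = 0
    while s:
        idx ^= s & (BANKS - 1)
        s >>= bits
    return idx

# a power-of-two stride equal to BANKS * LINE defeats plain interleaving:
addrs = [i * BANKS * LINE for i in range(64)]
plain_banks = {bank_plain(a) for a in addrs}   # every access hits one bank
xor_banks = {bank_xor(a) for a in addrs}       # hash spreads them out
```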
-
Simulation results of field experiments on the creation of updrafts for the development of artificial clouds and precipitation
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 941-956
A promising method of increasing precipitation in arid climates is the creation of a vertical high-temperature jet seeded with hygroscopic aerosol. Such an installation makes it possible to create artificial clouds, with the possibility of precipitation formation, in a cloudless atmosphere, unlike traditional methods of artificial precipitation enhancement, which increase the efficiency of precipitation formation only in natural clouds by seeding them with crystallization and condensation nuclei. To increase the power of the jet, calcium chloride, carbamide and salt in the form of a coarse aerosol are added, as well as a novel NaCl/TiO2 core/shell nanopowder, which is capable of condensing much more water vapor than the listed types of aerosol. The dispersed inclusions in the jet also serve as crystallization and condensation centers in the created cloud, increasing the possibility of precipitation. To simulate convective flows in the atmosphere, the FlowVision mathematical model of large-scale atmospheric flows is used; the equations of motion, energy and mass transfer are solved in relative variables. The statement of the problem is divided into two parts: the initial jet model and the FlowVision large-scale atmospheric model. The lower region, where the initial high-speed jet flows, is calculated using a compressible formulation, with the energy equation solved with respect to the total enthalpy. This division of the problem into two separate subdomains is necessary in order to correctly carry out the numerical calculation of the initial turbulent jet at high velocity (M > 0.3). The main mathematical dependencies of the model are given. Numerical experiments were carried out using the presented model, with the initial data taken from field tests of the installation for creating artificial clouds.
Good agreement with the experiment is obtained: in 55% of the calculations, the vertical velocity at a height of 400 m (more than 2 m/s) and the height of the jet rise (more than 600 m) are within a deviation of 30% of the experimental characteristics, and in 30% of the calculations they are completely consistent with the experiment. The results of the numerical simulation make it possible to evaluate the use of the high-speed jet method to stimulate artificial updrafts and create precipitation. The calculations were carried out using the FlowVision CFD software on the SUSU Tornado supercomputer.
Keywords: artificial clouds, numerical simulation, CFD, artificial precipitation, meteorology, jet, meteotron.
-
Calibration of an elastostatic manipulator model using AI-based design of experiment
Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1535-1553
This paper demonstrates the advantages of using artificial intelligence algorithms within the design of experiment theory, which makes it possible to improve the accuracy of parameter identification for an elastostatic robot model. Design of experiment for a robot consists in finding optimal configuration–external force pairs for the identification algorithms and can be described by several main stages. At the first stage, an elastostatic model of the robot is created, taking into account all possible mechanical compliances. At the second stage, the objective function is selected; it can be represented by both classical optimality criteria and criteria defined by the desired application of the robot. At the third stage, the optimal measurement configurations are found using numerical optimization. At the fourth stage, the position of the robot body is measured in the obtained configurations under the influence of an external force. At the last, fifth stage, the elastostatic parameters of the manipulator are identified based on the measured data.
The objective function required for finding the optimal configurations for industrial robot calibration is constrained by mechanical limits, both on the possible rotation angles of the robot's joints and on the possible applied forces. The solution of this multidimensional constrained problem is not simple; therefore, it is proposed to use approaches based on artificial intelligence. To find the minimum of the objective function, the following methods, sometimes also called heuristics, were used: genetic algorithms, particle swarm optimization, the simulated annealing algorithm, etc. The obtained results were analyzed in terms of the time required to obtain the configurations, the optimal value, and the final accuracy after applying the calibration. The comparison showed the advantages of the considered artificial intelligence-based optimization techniques over classical methods of finding the optimal value. The results of this work allow reducing the time spent on calibration and increasing the positioning accuracy of the robot's end-effector after calibration for contact operations with high loads, such as machining and incremental forming.
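As a sketch of the heuristic side, here is a generic simulated-annealing minimiser over box constraints; the quadratic objective and the bounds stand in for the real calibration criterion and the joint limits (all names and values are hypothetical, and the paper's other heuristics follow the same pattern with a different move/selection rule).

```python
import math
import random

def anneal(f, bounds, iters=8000, t0=1.0, seed=3):
    """Simulated annealing over a box: Gaussian moves whose size shrinks
    with temperature, Metropolis acceptance, best-so-far tracking."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    fx = f(x)
    best, fbest = x[:], fx
    for k in range(iters):
        T = t0 * (1 - k / iters) + 1e-9           # linear cooling schedule
        step = 0.01 + 0.2 * T                     # move size shrinks with T
        cand = [min(max(xi + rng.gauss(0, step * (hi - lo)), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc                      # Metropolis acceptance
            if fx < fbest:
                best, fbest = x[:], fx
    return best, fbest

# placeholder objective and joint limits standing in for the real criterion
objective = lambda q: sum((qi - 0.3) ** 2 for qi in q)
joint_limits = [(-math.pi, math.pi)] * 3
q_best, f_best = anneal(objective, joint_limits)
```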
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index