All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
- Component analysis of binary media using acoustic reflecto-impedancemetry
Computer Research and Modeling, 2015, v. 7, no. 2, pp. 301-313
A computer model of the component analysis of binary media, based on the application of a new method, acoustic reflecto-impedancemetry, and implemented in the LabVIEW graphical programming environment, is considered. Prospects for the metrological and instrumental support of experimental applications of the model are discussed.
- Empirical testing of institutional matrices theory by data mining
Computer Research and Modeling, 2015, v. 7, no. 4, pp. 923-939
The paper aims to identify the set of environmental and infrastructural parameters with the most significant impact on the institutional matrices that dominate in different countries. The environmental parameters include raw statistical indices derived directly from open-access databases, as well as complex integral indicators obtained by the method of principal components. The usefulness of these parameters for recognizing the dominant institutional matrix type (X or Y) was evaluated by a number of machine learning methods. It was revealed that the greatest information content is associated with parameters characterizing the risk of natural disasters, the level of urbanization, the development of transport infrastructure, and the monthly averages and seasonal variations of temperature and precipitation.
Keywords: institutional matrices theory, machine learning.
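A rough sketch of the kind of pipeline the abstract describes, assuming generic open-data features and a binary X/Y label; the scikit-learn setup and all numbers are illustrative, not the authors' code:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: rows are countries, columns are environmental /
# infrastructural indicators; y is the dominant matrix type (0 = X, 1 = Y).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 25))
y = rng.integers(0, 2, size=120)

# Integral indicators via principal components, then a classifier whose
# cross-validated accuracy measures how informative the parameters are.
model = make_pipeline(StandardScaler(), PCA(n_components=10),
                      RandomForestClassifier(n_estimators=300, random_state=0))
print(cross_val_score(model, X, y, cv=5).mean())
```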
- The modeling of nonlinear pulse waves in elastic vessels using the Lattice Boltzmann method
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 707-722
In the present paper the application of kinetic methods to blood flow problems in elastic vessels is studied. The Lattice Boltzmann (LB) kinetic equation is applied. This model describes the space- and time-discretized dynamics of particles traveling on a one-dimensional Cartesian lattice. In the limit of small times between collisions, LB models reproduce hydrodynamic equations that are equivalent to the Navier – Stokes equations for a compressible fluid when the flow is slow (small Mach number). If, in the resulting hydrodynamic equations, the variables corresponding to density and sound speed are formally replaced by luminal area and pulse wave velocity, the well-known 1D equations for blood flow in elastic vessels are obtained for the particular case of a constant pulse wave speed.
In reality the pulse wave velocity is a function of the luminal area. Here an interesting analogy is observed: the equation of state (which defines the sound speed) becomes the pressure-area relation. Thus, a generalization of the equation of state is needed. This procedure, popular in the modeling of non-ideal gases, is performed by introducing a virtual force, which allows an arbitrary pressure-area dependence to be modeled in the resulting hemodynamic equations.
Two test problems are considered. In the first, the propagation of a single nonlinear pulse wave is studied for the case of the Laplace pressure-area response. In the second, the pulse wave dynamics is considered for a vessel bifurcation. The results show good agreement with data from the literature.
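For orientation, the correspondence described above can be written out schematically; this is the standard 1D hemodynamic system in assumed notation, not necessarily the authors' exact formulation:

$$ \frac{\partial A}{\partial t} + \frac{\partial (Au)}{\partial x} = 0, \qquad \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = -\frac{1}{\rho}\,\frac{\partial p(A)}{\partial x}, $$

where the luminal area $A$ plays the role of the density, the pressure-area relation $p(A)$ replaces the equation of state, and the pulse wave velocity $c(A) = \sqrt{(A/\rho)\,\partial p/\partial A}$ replaces the sound speed.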
- Mathematical model of the biometric iris recognition system
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 629-639
Automatic recognition of personal identity by biometric features is based on unique peculiarities or characteristics of people. The biometric identification process consists in making reference templates and comparing them with new input data. Iris pattern recognition algorithms demonstrate high accuracy and a low percentage of identification errors in practice. The advantages of the iris pattern over other biometric features are determined by its high number of degrees of freedom (nearly 249), excessive density of unique features and constancy. A high level of recognition reliability is very important because it enables search in large databases. Unlike the one-to-one check mode, which is applicable only to a small number of comparisons, it allows working in the one-to-many identification mode. Every biometric identification system is probabilistic, and its qualitative characteristics are described by such parameters as recognition accuracy, false acceptance rate and false rejection rate. These characteristics allow identity recognition methods to be compared and system performance to be assessed under any circumstances. This article explains the mathematical model of iris pattern biometric identification and its characteristics. In addition, the results of comparing the model with the real recognition process are analyzed. For this analysis, a review of existing iris pattern recognition methods based on different unique feature vectors was carried out. A Python-based software package is described that builds probabilistic distributions and generates large test data sets. Such data sets can also be used to train the neural network that makes the identification decision. Furthermore, a synergy algorithm combining several iris pattern identification methods is suggested to improve the qualitative characteristics of the system in comparison with the use of each method separately.
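As a loose illustration of the probabilistic characteristics mentioned above (not the authors' package), the false acceptance and false rejection rates can be estimated from genuine and impostor score distributions at a given decision threshold; all distributions and numbers below are synthetic:

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """Estimate false acceptance and false rejection rates.

    Scores are assumed to be dissimilarities (e.g. Hamming distances),
    so a comparison is accepted when the score falls below the threshold.
    """
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    far = np.mean(impostor < threshold)   # impostors wrongly accepted
    frr = np.mean(genuine >= threshold)   # genuine users wrongly rejected
    return far, frr

# Synthetic example: distance distributions for matching and non-matching irises
rng = np.random.default_rng(0)
genuine = rng.normal(0.11, 0.05, 10_000)   # hypothetical genuine-match distances
impostor = rng.normal(0.46, 0.03, 10_000)  # hypothetical impostor distances
print(far_frr(genuine, impostor, threshold=0.32))
```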
- Mathematical and computational problems associated with the formation of structures in complex systems
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 805-815
In this paper, the system of equations of magnetohydrodynamics (MHD) is considered. The exact solutions found describe fluid flows in a porous medium; they are related to the development of a core simulator and the creation of a domestic «digital deposit» technology, and to the problem of controlling the parameters of an incompressible fluid. The central difficulty associated with the use of computer technology is large-dimensional grid approximations, which require high-performance supercomputers with a large number of parallel microprocessors. Kinetic methods for solving differential equations and methods for «gluing» exact solutions on coarse grids are being developed as possible alternatives to large-dimensional grid approximations. A comparative analysis of the efficiency of computing systems leads to the conclusion that calculations should be organized on the basis of integer arithmetic in combination with universal approximate methods. A class of exact solutions of the Navier – Stokes system is proposed that describes three-dimensional flows of an incompressible fluid, as well as exact solutions of nonstationary three-dimensional magnetohydrodynamics. These solutions are important for practical problems of the controlled dynamics of mineralized fluids, as well as for creating test libraries for the verification of approximate methods. A number of phenomena associated with the formation of macroscopic structures are highlighted: those arising from the high intensity of interaction of elements of spatially homogeneous systems, and those arising from linear spatial transfer in spatially inhomogeneous systems. It is fundamental that the emergence of structures is a consequence of the discontinuity of operators in the norms of conservation laws. The theory of computational methods is most developed and universal for linear problems. Therefore, from this point of view, procedures for «immersing» nonlinear problems into general linear classes by changing the initial dimension of the description and expanding the functional spaces are important. The identification of functional solutions with functions makes it possible to compute the integral averages of an unknown; at the same time its nonlinear superpositions, generally speaking, are not weak limits of nonlinear superpositions of the approximations of the method, i.e. there exist functional solutions that are not generalized solutions in the sense of S. L. Sobolev.
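For reference, the incompressible MHD system referred to above is commonly written as follows (standard notation assumed here; the paper's exact form is not reproduced):

$$ \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu\,\Delta\mathbf{u} + \frac{1}{\mu_0\rho}(\nabla\times\mathbf{B})\times\mathbf{B}, $$
$$ \frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{u}\times\mathbf{B}) + \eta\,\Delta\mathbf{B}, \qquad \nabla\cdot\mathbf{u} = 0, \quad \nabla\cdot\mathbf{B} = 0. $$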
- Reducing miss rate in a non-inclusive cache with inclusive directory of a chip multiprocessor
Computer Research and Modeling, 2023, v. 15, no. 3, pp. 639-656
Although the era of exponential performance growth in computer chips has ended, processor core counts have reached 16 or more even in general-purpose desktop CPUs. As DRAM throughput is unable to keep pace with this growth in computing power, CPU designers need to find ways of lowering the memory traffic per instruction. The straightforward way to do this is to reduce the miss rate of the last-level cache. Assuming a “non-inclusive cache, inclusive directory” (NCID) scheme is already implemented, three ways of further reducing the cache miss rate were studied.
The first is to achieve more uniform usage of cache banks and sets by employing hash-based interleaving and indexing. In experiments on the SPEC CPU2017 refrate tests, even the simplest XOR-based hash functions demonstrated performance increases of 3.2%, 9.1%, and 8.2% for CPU configurations with 16, 32, and 64 cores and last-level cache banks, comparable to the results of more complex matrix-, division- and CRC-based functions.
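As a rough illustration of this kind of XOR-based index hashing (the paper's exact functions are not reproduced; field widths are illustrative), the set or bank index can be formed by folding higher address bits onto the conventional index bits:

```python
def xor_set_index(addr: int, line_bits: int = 6, index_bits: int = 10) -> int:
    """Toy XOR-folding hash: spread regular address patterns across cache sets.

    addr       -- physical address (illustrative)
    line_bits  -- log2 of the cache line size
    index_bits -- log2 of the number of sets (or banks)
    """
    mask = (1 << index_bits) - 1
    low = (addr >> line_bits) & mask                  # conventional index bits
    high = (addr >> (line_bits + index_bits)) & mask  # next group of tag bits
    return low ^ high  # XOR-fold: strided accesses no longer map to one set
```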
The second optimisation is aimed at reducing replication across cache levels by automatically switching to the exclusive scheme when it appears optimal. A known scheme of this type, FLEXclusion, was modified for use in NCID caches and showed an average performance gain of 3.8%, 5.4%, and 7.9% for 16-, 32-, and 64-core configurations.
The third optimisation is to increase the effective cache capacity using compression. The compression rate of the inexpensive and fast BDI*-HL (Base-Delta-Immediate Modified, Half-Line) algorithm, designed for NCID, was measured, and the respective increase in cache capacity yielded an average performance increase of roughly 1%.
All three optimisations can be combined; together they demonstrated a performance gain of 7.7%, 16%, and 19% for CPU configurations with 16, 32, and 64 cores and banks, respectively.
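BDI*-HL itself is the authors' modification and is not reproduced here; a toy version of the underlying base-delta idea (word and delta sizes are illustrative) checks whether a half-line of words fits into one base value plus narrow signed deltas:

```python
def bdi_compressible(words, delta_bits=8):
    """Return True if all words fit as base + signed delta of `delta_bits`.

    `words` models the 4-byte words of a 32-byte half cache line; a real
    BDI-style scheme tries several base/delta size combinations.
    """
    base = words[0]
    lo, hi = -(1 << (delta_bits - 1)), (1 << (delta_bits - 1)) - 1
    return all(lo <= w - base <= hi for w in words)

# Example: pointer-like values close to a common base compress well.
print(bdi_compressible([0x7fff1000, 0x7fff1008, 0x7fff1010, 0x7fff1020,
                        0x7fff1030, 0x7fff1038, 0x7fff1040, 0x7fff1048]))
```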
- Simulation results of field experiments on the creation of updrafts for the development of artificial clouds and precipitation
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 941-956
A promising method of increasing precipitation in arid climates is the creation of a vertical high-temperature jet seeded with hygroscopic aerosol. Such an installation makes it possible to create artificial clouds capable of precipitation formation in a cloudless atmosphere, unlike traditional methods of artificial precipitation enhancement, which only increase the efficiency of precipitation formation in natural clouds by seeding them with crystallization and condensation nuclei. To increase the power of the jet, calcium chloride, carbamide and salt in the form of a coarse aerosol are added, as well as a novel NaCl/TiO2 core/shell nanopowder, which is capable of condensing much more water vapor than the listed types of aerosols. The dispersed inclusions in the jet also serve as crystallization and condensation centers in the created cloud, increasing the possibility of precipitation. To simulate convective flows in the atmosphere, the FlowVision mathematical model of large-scale atmospheric flows is used; the equations of motion, energy and mass transfer are solved in relative variables. The statement of the problem is divided into two parts: the initial jet model and the FlowVision large-scale atmospheric model. The lower region, where the initial high-speed jet flows, is calculated using a compressible formulation, with the energy equation solved with respect to the total enthalpy. This division of the problem into two separate subdomains is necessary to correctly carry out the numerical calculation of the initial turbulent jet at high velocity (M > 0.3). The main mathematical dependencies of the model are given. Numerical experiments were carried out using the presented model, with experimental data from field tests of the installation for creating artificial clouds taken as the initial data. Good agreement with experiment is obtained: in 55% of the calculations the vertical velocity at a height of 400 m (more than 2 m/s) and the height of the jet rise (more than 600 m) are within a 30% deviation of the experimental characteristics, and in 30% of the calculations they are fully consistent with the experiment. The results of the numerical simulation make it possible to evaluate the applicability of the high-speed jet method for stimulating artificial updrafts and creating precipitation. The calculations were carried out using the FlowVision CFD software on the SUSU Tornado supercomputer.
Keywords: artificial clouds, numerical simulation, CFD, artificial precipitation, meteorology, jet, meteotron.
- Determination of post-reconstruction correction factors for quantitative assessment of pathological bone lesions using gamma emission tomography
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 677-696
In single-photon emission computed tomography (SPECT), patients with bone disorders receive a radiopharmaceutical (RP) that accumulates selectively in pathological lesions. Accurate quantification of RP uptake plays a critical role in disease staging, prognosis, and the development of personalized treatment strategies. Traditionally, the accuracy of quantitative assessment is evaluated through in vitro clinical trials using the standardized physical NEMA IEC phantom, which contains six spheres simulating lesions of various sizes. However, such experiments are limited by high costs and radiation exposure to researchers. This study proposes an alternative in silico approach based on numerical simulation using a digital twin of the NEMA IEC phantom. The computational framework allows for extensive testing under varying conditions without physical constraints. Analogous to clinical protocols, we calculated the recovery coefficient (RCmax), defined as the ratio of the maximum activity in a lesion to its known true value. The simulation settings were tailored to clinical SPECT/CT protocols involving 99mTc for patients with bone-related diseases. For the first time, we systematically analyzed the impact of lesion-to-background ratios and post-reconstruction filtering on RCmax values. Numerical experiments revealed the presence of edge artifacts in reconstructed lesion images, consistent with those observed in both real NEMA IEC phantom studies and patient scans. These artifacts introduce instability into the iterative reconstruction process and lead to errors in activity quantification. Our results demonstrate that post-filtering helps suppress edge artifacts and stabilizes the solution. However, it also significantly underestimates activity in small lesions. To address this issue, we introduce post-reconstruction correction factors derived from our simulations to improve the accuracy of quantification in lesions smaller than 20 mm in diameter.
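In the terms of the abstract, the recovery coefficient compares the maximum reconstructed activity in a lesion with its known true value (the symbols here are chosen for illustration):

$$ RC_{\max} = \frac{A_{\max}^{\text{reconstructed}}}{A_{\text{true}}}, $$

so a natural post-reconstruction correction for a lesion of a given size is multiplication of the measured value by $1/RC_{\max}$.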
- Modeling of sand-gravel bed evolution in one dimension
Computer Research and Modeling, 2015, v. 7, no. 2, pp. 315-328
In the paper a model of a one-dimensional non-equilibrium riverbed process is proposed. The model takes into account suspended and bed-load sediment transport. The bed-load transport is determined using an original formula derived from the equation of motion of a thin bottom layer. The formula does not contain new phenomenological parameters and takes into account the influence of the bed slope and of granulometric and physical-mechanical parameters on the bed-load transport. A number of test problems are solved to verify the proposed mathematical model. The calculation results are compared with established experimental data and with the results of other authors. It is shown that the obtained results agree well with the experimental data despite the relative simplicity of the proposed mathematical model.
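The sediment mass balance underlying one-dimensional bed evolution models of this type is commonly written as the Exner equation (a standard form given for orientation, not the authors' original bed-load formula):

$$ (1-\varepsilon)\,\frac{\partial z_b}{\partial t} + \frac{\partial q_b}{\partial x} = 0, $$

where $z_b$ is the bed elevation, $q_b$ the bed-load flux and $\varepsilon$ the bed porosity.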
- Detection of the influence of the upper working roll's vibration on sheet thickness in cold rolling using DEFORM-3D software
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 111-116
Current trends in technical diagnostics are connected with the application of FEM computer simulation, which allows real experiments to be replaced to some extent, reduces research costs and minimizes risks. Computer simulation, already at the research and development stage, allows equipment diagnostics to be carried out to detect permissible fluctuations in the equipment's operating parameters. A peculiarity of diagnosing rolling equipment is that its functioning is directly tied to manufacturing a product of the required quality, including accuracy. Therefore the design of techniques for technical diagnostics and diagnostic modeling is very important. Computer simulation of the cold rolling of a strip was carried out, in which the upper working roll vibrated in the horizontal direction in accordance with published experimental data for the continuous 1700 rolling mill. The vibration of the working roll in the stand arises due to the gap between the roll chock and the stand guide and leads to periodic fluctuations of the strip thickness. Computer simulation with the DEFORM software produced a strip with longitudinal and transverse thickness variation. The visualization of the strip's geometric parameters, according to the simulation data, corresponded to the type of surface inhomogeneity of strips rolled in practice. Further analysis of the thickness variation was performed to identify, on the basis of the simulation, the sources of the periodic components of the strip thickness caused by equipment malfunctions. The advantage of computer simulation in searching for the sources of thickness variation is that different hypotheses about thickness formation can be tested without real experiments, reducing costs of various kinds. Moreover, in simulation the initial strip thickness has no fluctuations, as opposed to industrial or laboratory experiments. On the basis of spectral analysis of the random process, it was established that the frequency of the strip thickness variation after rolling in one stand coincides with the frequency of the working roll's vibration. The results of the computer simulation correlate with the results of research on the 1700 mill. Thus, the possibility of applying computer simulation to find the causes of strip thickness variation on an industrial rolling mill is shown.
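A minimal sketch of the kind of spectral check described above, on a synthetic thickness signal; the sampling rate, vibration frequency and amplitudes are illustrative, not taken from the study:

```python
import numpy as np

# Synthetic thickness profile: nominal 2 mm strip with a periodic component
# injected at a hypothetical roll vibration frequency of 8 Hz.
fs = 1000.0                     # sampling rate of the thickness record, Hz
t = np.arange(0, 10, 1 / fs)    # 10 s record
thickness = (2.0 + 0.005 * np.sin(2 * np.pi * 8.0 * t)
             + 0.001 * np.random.default_rng(0).normal(size=t.size))

# Spectrum of the fluctuating part; the dominant peak should sit at the
# roll vibration frequency if the roll is the source of the variation.
spectrum = np.abs(np.fft.rfft(thickness - thickness.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(f"dominant component at {freqs[spectrum.argmax()]:.1f} Hz")
```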