All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Numerical simulation of fluid flow in a blood pump in the FlowVision software package
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 1025-1038. A numerical simulation of fluid flow in a blood pump was performed using the FlowVision software package. This test problem, provided by the Center for Devices and Radiological Health of the U.S. Food and Drug Administration, considers fluid flow under several operating modes; for each calculation case a specific value of the liquid flow rate and rotor speed was set. The data required for the calculations (the exact geometry, flow conditions and fluid characteristics) were provided to all research participants, who used different software packages for the modeling. Numerical simulations were performed in FlowVision for six calculation modes with a Newtonian fluid and the standard $k-\varepsilon$ turbulence model; in addition, the fifth mode was computed with the $k-\omega$ SST turbulence model and with the Carreau rheological fluid model. In the first stage of the numerical simulation, mesh convergence was investigated, on the basis of which a final mesh with about 6 million cells was chosen. Because of the large number of cells, part of the calculations was performed on the Lomonosov-2 cluster to speed up the study. As a result of the numerical simulation, we obtained and analyzed the pressure difference between the inlet and outlet of the pump and the velocity between the rotor blades and in the diffuser area, and we visualized the velocity distribution in selected cross-sections. For all operating modes the numerically obtained pressure difference was compared with the experimental data, and for the fifth calculation mode the velocity distribution between the rotor blades and in the diffuser area was also compared with the experiment. The data analysis has shown good agreement of the FlowVision results with the experimental results and with numerical simulations in other software packages. The results obtained in FlowVision for the US FDA test problem suggest that the FlowVision software package can be used to solve a wide range of hemodynamic problems.
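The Carreau rheological model mentioned in this abstract has a standard closed form. Below is a minimal sketch (not taken from the paper; the default parameter values are ones commonly quoted for blood and are purely illustrative) of how the effective viscosity depends on the shear rate:

```python
import numpy as np

def carreau_viscosity(shear_rate, mu_inf=0.0035, mu_0=0.056, lam=3.313, n=0.3568):
    """Carreau effective viscosity:
    mu = mu_inf + (mu_0 - mu_inf) * (1 + (lam * gamma_dot)**2) ** ((n - 1) / 2).
    Default parameters are values commonly quoted for blood (illustrative only).
    """
    return mu_inf + (mu_0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

# Shear-thinning behaviour: viscosity drops as the shear rate grows.
for rate in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(f"gamma_dot = {rate:7.1f} 1/s -> mu = {carreau_viscosity(rate):.5f} Pa*s")
```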
-
Study and optimization of wireless sensor network based on ZigBee protocol
Computer Research and Modeling, 2012, v. 4, no. 4, pp. 855-869. Views (last year): 5. Citations: 12 (RSCI). Operation algorithms of wireless sensor networks based on a modified ZigBee/IEEE 802.15.4 protocol stack are studied, together with the problem of saving energy while simultaneously decreasing network latency. Theoretical computations are given. Algorithms for role distribution and router schedule assignment are described. Both the results of experiments carried out with real devices and the results of simulations with ns-2 (an open-source network simulator) are presented and analyzed.
-
Calculation of magnetic properties of nanostructured films by means of the parallel Monte Carlo method
Computer Research and Modeling, 2013, v. 5, no. 4, pp. 693-703. Views (last year): 4. Citations: 1 (RSCI). Images of the surface topography of ultrathin magnetic films have been used for Monte Carlo simulations in the framework of the ferromagnetic Ising model to study the hysteresis and thermal properties of nanomaterials. For high-performance calculations, a highly scalable parallel algorithm was used to find the equilibrium configuration. The change of the spin distribution on the surface during magnetization reversal and the dynamics of the nanodomain structure of thin magnetic films under a changing external magnetic field were investigated.
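The core of such Ising-model simulations is the Metropolis update. A minimal single-threaded sketch follows (illustrative only; it is not the highly scalable parallel algorithm from the paper and uses a plain square lattice instead of measured film topography):

```python
import numpy as np

def metropolis_sweep(spins, beta, h, rng):
    """One Metropolis sweep over a 2D Ising lattice with periodic boundaries.
    Flipping spin s changes the energy by dE = 2*s*(sum of neighbours + h) for J = 1;
    the flip is accepted with probability min(1, exp(-beta*dE)).
    """
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        neighbours = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                      + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * spins[i, j] * (neighbours + h)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = -spins[i, j]

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(64, 64))
for sweep in range(200):
    metropolis_sweep(spins, beta=0.5, h=0.1, rng=rng)
print("magnetization per spin:", spins.mean())
```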
-
Models of soil organic matter dynamics: problems and perspectives
Computer Research and Modeling, 2016, v. 8, no. 2, pp. 391-399. Soil, as a complex multifunctional open system, is one of the most difficult objects to model. In spite of serious achievements in soil system modeling, existing models do not reflect all aspects and processes of soil organic matter mineralization and humification. The problems and “hot spots” in modeling the dynamics of soil organic matter and biophilous elements were identified on the basis of the development and wide application of the ROMUL and EFIMOD models. The following aspects are discussed: further theoretical background; improving the structure of the models; preparation and uncertainty of the initial data; inclusion of all soil biota (microorganisms, micro- and meso-fauna) as factors of humification; the impact of soil mineralogy on C and N dynamics; the hydrothermal regime and organic matter distribution over the whole soil profile; vertical and horizontal migration of soil organic matter. Effective feedback from modellers to experimentalists is necessary to solve the listed problems.
Keywords: mathematical model, soil organic matter. Views (last year): 2. Citations: 3 (RSCI).
-
Comparative analysis of statistical methods of scientific publications classification in medicine
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 921-933. In this paper various methods of machine classification of scientific texts by thematic sections are compared on the example of publications in specialized medical journals published by Springer. The corpus of texts was studied in five sections: pharmacology/toxicology, cardiology, immunology, neurology and oncology. We considered both classification methods based on the analysis of abstracts and keywords and classification methods based on processing the full texts. Bayesian classification, support vector machines and reference letter-trigram libraries were applied. It is shown that the most accurate classification method is based on creating a library of reference letter trigrams that correspond to texts on a certain subject. It turned out that for this corpus the Bayesian method gives an error of about 20%, the support vector machine an error of about 10%, and the proximity of the three-letter distribution of a text to the subject reference an error of about 5%, which makes it possible to rank these methods for the use of artificial intelligence in the task of classifying texts by subject specialty. It is important that the support vector method provides the same accuracy when analyzing abstracts as when analyzing full texts, which matters for reducing the number of operations for large text corpora.
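A hedged sketch of the letter-trigram approach described above, built from standard scikit-learn components rather than the authors' code: each thematic section gets a reference profile of character-trigram frequencies, and a new text is assigned to the section whose profile is closest. The toy texts and labels are invented for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus; in the paper the classes are sections of medical journals.
train_texts = ["beta blockers reduce heart rate and blood pressure",
               "myocardial infarction and arrhythmia risk factors",
               "tumor growth, metastasis and staging",
               "chemotherapy response and survival in oncology"]
train_labels = ["cardiology", "cardiology", "oncology", "oncology"]

# Character trigrams as features.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 3))
X = vectorizer.fit_transform(train_texts)

# Reference profile per class: mean trigram vector over that class's texts.
classes = sorted(set(train_labels))
profiles = np.vstack([np.asarray(X[[i for i, y in enumerate(train_labels) if y == c]]
                                 .mean(axis=0)) for c in classes])

def classify(text):
    """Assign the class whose trigram profile is closest by cosine similarity."""
    v = vectorizer.transform([text])
    return classes[int(cosine_similarity(v, profiles).argmax())]

print(classify("heart rate variability in cardiac patients"))
```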
-
Assessing the validity of clustering of panel data by Monte Carlo methods (using data on the Russian regional economy as an example)
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1501-1513. The paper considers a method for studying panel data based on agglomerative hierarchical clustering: grouping objects, based on the similarities and differences in their features, into a hierarchy of clusters nested into each other. We used two alternative methods for calculating Euclidean distances between objects: the distance between values averaged over the observation interval, and the distance computed from the data for all considered years. Three alternative methods for calculating the distances between clusters were compared. In the first case, the distance between two clusters is taken as the distance between their nearest elements; in the second, as the average over pairs of elements; in the third, as the distance between their most distant elements. The efficiency of two clustering quality indices, the Dunn and Silhouette indices, was studied for selecting the optimal number of clusters and evaluating the statistical significance of the obtained solutions. The method of assessing the statistical reliability of the cluster structure consisted in comparing the quality of clustering on the real sample with the quality of clustering on artificially generated samples of panel data with the same number of objects, features and lengths of time series. The artificial samples were generated from a fixed probability distribution; simulation methods imitating Gaussian white noise and a random walk were used. Calculations with the Silhouette index showed that a random walk is characterized not only by spurious regression, but also by “spurious clustering”. Clustering was considered reliable for a given number of selected clusters if the index value on the real sample turned out to be greater than the 95% quantile of the values for the artificial data. A set of time series of indicators characterizing production in the regions of the Russian Federation was used as the sample of real data. For these data only the Silhouette index shows reliable clustering at the level p < 0.05. Calculations also showed that the index values for the real data are generally closer to the values for random walks than for white noise, but differ significantly from both. Since a three-dimensional feature space is used, the quality of clustering was also evaluated visually; one can visually distinguish clusters of points located close to each other, which are also identified as clusters by the applied hierarchical clustering algorithm.
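A minimal sketch of the surrogate-data test described above, using standard scikit-learn tools and synthetic data invented for illustration (not the authors' code or the regional statistics): the Silhouette score of the real panel is compared with the 95% quantile of scores obtained on random-walk panels of the same shape.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def clustering_quality(panel, n_clusters):
    """Treat each object's multi-year series as one feature vector and cluster it."""
    X = panel.reshape(panel.shape[0], -1)
    labels = AgglomerativeClustering(n_clusters=n_clusters, linkage="ward").fit_predict(X)
    return silhouette_score(X, labels)

rng = np.random.default_rng(1)
n_objects, n_features, n_years, k = 80, 3, 10, 4

# Hypothetical "real" panel (objects x features x years); here it is itself synthetic.
real_panel = rng.normal(size=(n_objects, n_features, n_years)).cumsum(axis=2)
real_score = clustering_quality(real_panel, k)

# Null distribution: Silhouette scores of random-walk panels of the same shape.
null_scores = [clustering_quality(rng.normal(size=real_panel.shape).cumsum(axis=2), k)
               for _ in range(200)]
threshold = np.quantile(null_scores, 0.95)

print(f"silhouette = {real_score:.3f}, 95% random-walk quantile = {threshold:.3f}, "
      f"reliable = {real_score > threshold}")
```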
-
Performance prediction for chosen types of loops over one-dimensional arrays with embedding-driven intermediate representations analysis
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 211-224. A method for mapping the set of intermediate representations (IR) of C and C++ programs into a vector embedding space is considered in order to build an empirical framework for static performance prediction using the LLVM compiler infrastructure. The use of embeddings makes programs easier to compare because it avoids direct comparison of Control Flow Graphs (CFG) and Data Flow Graphs (DFG). The method is based on a series of transformations of the initial IR: instrumentation (injection of artificial instructions in an instrumentation compiler pass depending on the load offset delta of the current instruction relative to the previous one), mapping of the instrumented IR into a multidimensional vector with IR2Vec, and dimension reduction with the t-SNE (t-distributed stochastic neighbor embedding) method. The D1 cache miss ratio measured with the perf stat tool is used as the performance metric. A heuristic criterion for deciding which programs have a higher or lower cache miss ratio is given; this criterion is based on the embeddings of the programs in 2D space. The instrumentation compiler pass developed in this work is described, namely how it generates and injects artificial instructions into the IR within the memory model used. The software pipeline that implements the performance estimation on top of the LLVM compiler infrastructure is presented. Computational experiments are performed on synthetic tests, which are sets of programs with the same CFGs but with different sequences of offsets used when accessing a one-dimensional array of a given size. The correlation coefficient between the performance metric and the distance to the worst program's embedding is measured and shown to be negative regardless of the t-SNE initialization, which confirms the heuristic criterion. The process of generating such synthetic tests is also considered. Moreover, the spread of the performance metric across the programs of such a test is proposed as a metric to be improved by exploring further test generators.
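A schematic sketch of the final evaluation step only (the LLVM instrumentation pass and IR2Vec are not reproduced; the embeddings and miss ratios below are random placeholders): program embeddings are reduced with t-SNE, the 2D distance of every program to the worst program's embedding is computed, and its correlation with the cache-miss ratio is measured.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Placeholder inputs: one IR2Vec-style embedding and one D1 cache miss ratio per program.
embeddings = rng.normal(size=(40, 300))       # e.g. 300-dimensional IR vectors
miss_ratio = rng.uniform(0.0, 0.3, size=40)   # as measured with `perf stat` in the paper

# Dimension reduction to 2D.
points = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(embeddings)

# Distance of every program to the embedding of the worst program (highest miss ratio).
worst_point = points[miss_ratio.argmax()]
distances = np.linalg.norm(points - worst_point, axis=1)

# The paper reports a negative correlation on its synthetic tests; with random
# placeholder data the value printed here is illustrative only.
r, p = pearsonr(distances, miss_ratio)
print(f"Pearson r = {r:.3f} (p = {p:.3g})")
```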
-
Solution of optimization problem of wood fuel facility location by the thermal energy cost criterion
Computer Research and Modeling, 2012, v. 4, no. 3, pp. 651-659. Views (last year): 5. Citations: 2 (RSCI). The paper presents a mathematical model for the optimal location of enterprises producing fuel from renewable wood waste for a regional distributed heat supply system. The optimization is based on minimizing the total cost of the end product, the thermal energy from wood fuel. The method for solving the problem is based on a genetic algorithm. The paper also shows practical results of applying the model using the example of the Udmurt Republic.
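A minimal evolutionary sketch in the spirit of the genetic-algorithm approach mentioned above (mutation and elitist selection only, with invented cost data; it is not the model or algorithm from the paper): choose which candidate sites should host wood-fuel facilities so that fixed costs plus delivery costs to heat consumers are minimized.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_consumers, n_open = 20, 50, 4

# Invented data: fixed cost of opening a facility at each candidate site and
# the cost of delivering fuel from every site to every heat consumer.
fixed_cost = rng.uniform(100.0, 300.0, size=n_sites)
delivery_cost = rng.uniform(1.0, 20.0, size=(n_sites, n_consumers))

def total_cost(mask):
    """Fixed costs of open sites plus the cheapest delivery route for every consumer."""
    open_idx = np.flatnonzero(mask)
    return fixed_cost[open_idx].sum() + delivery_cost[open_idx].min(axis=0).sum()

def random_layout():
    mask = np.zeros(n_sites, dtype=bool)
    mask[rng.choice(n_sites, size=n_open, replace=False)] = True
    return mask

def mutate(mask):
    """Close one randomly chosen open site and open one randomly chosen closed site."""
    child = mask.copy()
    child[rng.choice(np.flatnonzero(child))] = False
    child[rng.choice(np.flatnonzero(~child))] = True
    return child

population = [random_layout() for _ in range(40)]
for generation in range(200):
    population.sort(key=total_cost)
    parents = population[:10]                                   # elitist selection
    offspring = [mutate(parents[rng.integers(len(parents))]) for _ in range(30)]
    population = parents + offspring

best = min(population, key=total_cost)
print("open sites:", np.flatnonzero(best).tolist(), "total cost:", round(total_cost(best), 1))
```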
-
Applications of on-demand virtual clusters to high performance computing
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 511-516. Views (last year): 1. Virtual machines are usually associated with the ability to create them on demand by calling web services; these machines are then used to deliver resident services to their clients. However, such services do not let clients run an arbitrary programme on the newly created machines. This kind of usage is useful in a high-performance computing environment, where most of the resources are consumed by batch programmes rather than by daemons or services. In this case a cluster of virtual machines is created on demand to run a distributed or parallel programme and to save its output to network-attached storage. Upon completion, this cluster is destroyed and its resources are released. With certain modifications, this approach can be extended to deliver computational resources to the user interactively, thus providing virtual desktop as a service. Experiments show that the process of creating virtual clusters on demand can be made efficient in both cases.
-
Forecasting the labor force dynamics in a multisectoral labor market
Computer Research and Modeling, 2021, v. 13, no. 1, pp. 235-250. The article considers the problem of forecasting the number of employed and unemployed persons in a multisectoral labor market using a balance mathematical model of intersectoral labor force dynamics.
The balance mathematical model makes it possible to calculate the values of the intersectoral dynamics indicators using only statistical data on sectoral employment and unemployment provided by the Federal State Statistics Service. Intersectoral labor force dynamics indicators calculated for several years in a row are used to build trends for each of these indicators. The found trends are used to calculate forecasted values of the intersectoral dynamics indicators, and the sectoral employment and unemployment of the studied multisectoral labor market are forecasted based on these values.
The proposed approach was applied to forecast the number of employed persons in the economic sectors of the Russian Federation in 2011–2016. The following types of trends were used to describe changes in the values of the intersectoral dynamics indicators: linear, nonlinear and constant. The procedure for selecting trends is demonstrated by the example of the indicators that determine the labor force movements from the “Transport and communications” sector to the “Healthcare and social services” sector, as well as from the “Public administration and military security, social security” sector to the “Education” sector.
Several approaches to forecasting were compared: a) a naive forecast, in which the labor market indicators were forecasted using only a constant trend; b) forecasting based on the balance model using only a constant trend for all intersectoral dynamics indicators; c) forecasting the number of employed persons in economic sectors directly, using the types of trends considered in the article; d) forecasting based on the balance model with a trend chosen for each intersectoral dynamics indicator.
The article shows that the use of the balance model provides better forecast quality compared to forecasting the number of employed persons directly, and that the use of trends in the intersectoral dynamics indicators further improves the forecast quality. The article also provides examples of analyzing the multisectoral labor market of the Russian Federation. Using the balance model, the following information was obtained: the distribution over economic sectors of the labor force flows outgoing from particular sectors, and the sectoral structure of the labor force flows incoming to particular sectors. This information is not directly contained in the data provided by the Federal State Statistics Service.
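A schematic sketch of how a balance model of this type propagates sectoral employment one year ahead (the sectors, numbers and transition shares below are invented for illustration and are not the indicators estimated in the article): the labor force of every state is redistributed according to a row-stochastic matrix of intersectoral dynamics indicators.

```python
import numpy as np

# Invented example with three sectors plus an "unemployed" state (millions of people).
states = ["industry", "services", "education", "unemployed"]
labor_force = np.array([10.0, 12.0, 4.0, 2.0])

# Intersectoral dynamics indicators: P[i, j] is the share of state i's labor force
# that moves to state j by the next year; every row sums to one.
P = np.array([
    [0.92, 0.04, 0.01, 0.03],
    [0.03, 0.93, 0.01, 0.03],
    [0.01, 0.02, 0.95, 0.02],
    [0.20, 0.25, 0.05, 0.50],
])
assert np.allclose(P.sum(axis=1), 1.0)

# One-step balance forecast: inflows to every state are summed over all source states.
forecast = labor_force @ P
for name, value in zip(states, forecast):
    print(f"{name:>10}: {value:.2f} mln")
```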
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index