All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Forecasting the global temperature increase for the XXI century by means of a simple statistical model
Computer Research and Modeling, 2016, v. 8, no. 2, pp. 379–390
A simple statistical model is developed for the dynamics of the mean global annual temperature. The model combines the logarithmic effect of increasing carbon dioxide concentration with the contribution of climatic cycles. Model parameters are determined from instrumental observations for 1850–2010. The model confirms the presence of climatic cycles with periods of 10.5 and 68.8 years in the global temperature dynamics. Trajectories of global temperature change for the XXI century are obtained under the carbon dioxide concentration scenarios of the Fifth IPCC Assessment Report. The comparison revealed that the global temperature trajectories from the Report lie 0.9–1.8 °C above those obtained with the model.
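As an illustration of the class of model described, here is a minimal sketch, not the authors' exact formulation: a logarithmic CO2 response plus two harmonic cycles with the periods named above (10.5 and 68.8 years), fitted with SciPy. The CO2 and temperature series below are synthetic placeholders standing in for the 1850–2010 observations.

```python
# Sketch only: one plausible realization of "log CO2 term + two climatic cycles".
import numpy as np
from scipy.optimize import curve_fit

def model(X, a, b, A1, phi1, A2, phi2):
    t, c = X                                 # years and CO2 concentrations
    trend = a + b * np.log(c / c[0])         # logarithmic CO2 response
    cycles = (A1 * np.sin(2 * np.pi * t / 10.5 + phi1)
              + A2 * np.sin(2 * np.pi * t / 68.8 + phi2))
    return trend + cycles

years = np.arange(1850, 2011).astype(float)
co2 = np.linspace(285.0, 390.0, years.size)        # synthetic stand-in, ppm
rng = np.random.default_rng(0)
true = model((years, co2), -0.4, 2.0, 0.08, 0.0, 0.15, 1.0)
temp = true + rng.normal(0.0, 0.05, years.size)    # synthetic "observations"

params, _ = curve_fit(model, (years, co2), temp, p0=[0, 1, 0.1, 0, 0.1, 0])
# Projections would feed scenario CO2 values into model(...) with these params.
```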
-
Modernization as a global process: the experience of mathematical modeling
Computer Research and Modeling, 2021, v. 13, no. 4, pp. 859–873
The article analyzes empirical data on the long-term demographic and economic dynamics of the countries of the world for the period from the beginning of the 19th century to the present. Population and GDP of a number of countries for the period 1500–2016 were selected as indicators characterizing this dynamics. Countries were chosen so as to include representatives at different levels of development (developed and developing countries) and from different regions of the world (North America, South America, Europe, Asia, Africa). A specially developed mathematical model was used for modeling and data processing. The model is an autonomous system of differential equations describing the processes of socio-economic modernization, including the transition from an agrarian society to an industrial and post-industrial one. It embodies the idea that modernization begins with the emergence, in a traditional society, of an innovative sector developing on the basis of new technologies. The population gradually moves from the traditional sector to the innovation sector, and modernization is complete when most of the population has moved to the innovation sector.
Statistical data-processing and Big Data methods, including hierarchical clustering, were used. Using a developed algorithm based on the random descent method, the model parameters were identified and verified against empirical series, and the model was tested on statistical data reflecting the changes observed in developed and developing countries during the modernization of the past centuries. Testing demonstrated the model's high quality: deviations of the calculated curves from the statistical data are usually small and occur during periods of wars and economic crises. Thus, the analysis of statistical data on the long-term demographic and economic dynamics of the countries of the world made it possible to identify general patterns and formalize them as a mathematical model. The model will be used to forecast demographic and economic dynamics in different countries of the world.
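The abstract does not state the model's equations, so the following is a purely illustrative sketch of a two-sector system in the same spirit: a traditional sector x and an innovation sector y with their own growth rates, and migration from x to y driven by contact with the innovation sector. All rates are invented for the example.

```python
# Illustrative only; not the authors' published system of equations.
import numpy as np
from scipy.integrate import solve_ivp

def modernization(t, z, rx, ry, m):
    """Two-sector toy model: x = traditional, y = innovation sector population."""
    x, y = z
    migration = m * x * y / (x + y)      # transition into the innovation sector
    return [rx * x - migration, ry * y + migration]

# Invented rates: slow traditional growth, faster innovation growth, migration.
sol = solve_ivp(modernization, (0.0, 200.0), [0.99, 0.01],
                args=(0.005, 0.015, 0.08), dense_output=True)
t = np.linspace(0.0, 200.0, 400)
x, y = sol.sol(t)
share_modern = y / (x + y)   # modernization completes when this share dominates
```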
-
Calculation of transverse wave speed in preloaded fibres under an impact
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 887–897
The paper considers the problem of transverse impact on a thin preloaded fiber. The commonly accepted theory of transverse impact on a thin fiber is based on the classical works of Rakhmatulin and Smith. The simple relations obtained from the Rakhmatulin–Smith theory are widely used in engineering practice. However, there is ample evidence that experimental results may differ significantly from estimates based on these relations. A brief overview of the factors that cause these differences is given in this article.
This paper focuses on the shear wave velocity, as it is the only quantity that can be directly observed and measured using high-speed cameras or similar methods. The influence of fiber preload on the wave speed is considered. This factor is important since it inevitably arises in experiments: reliable fastening and precise positioning of the fiber require a preload. This work shows that the preload significantly affects the shear wave velocity in the impacted fiber.
Numerical calculations were performed for Kevlar 29 and Spectra 1000 yarns. Shear wave velocities were obtained for different levels of initial tension. A direct comparison of numerical results and analytical estimates with experimental data is presented. For the setup parameters considered, the transverse wave speeds in free and preloaded fibers differed by a factor of two. This demonstrates that measurements based on high-speed imaging and analysis of the observed shear waves should take the preload of the fibers into account.
This paper proposes a formula for quick estimation of the shear wave velocity in preloaded fibers. The formula is obtained from the basic relations of the Rakhmatulin–Smith theory under the assumption of a large initial deformation of the fiber. It can give significantly better results than the classical approximation, as demonstrated using the data for preloaded Kevlar 29 and Spectra 1000. The paper also shows that direct numerical calculation agrees with the experimental data better than any of the considered analytical estimates.
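For orientation, a small worked example of the two classical estimates involved, using typical handbook values for Kevlar 29 as placeholders: Smith's relation for the transverse wave speed behind the impact in an initially unstressed fiber, and the taut-string speed sqrt(σ0/ρ) for a fiber pretensioned to strain ε0 under linear elasticity. The paper's own large-deformation formula is not reproduced here.

```python
import numpy as np

E, rho = 70.5e9, 1440.0      # Kevlar 29 handbook values, Pa and kg/m^3 (assumed)
c = np.sqrt(E / rho)         # longitudinal wave speed in the fiber

def smith_speed(eps):
    """Lab-frame transverse wave speed behind a longitudinal wave of strain eps
    (classical Rakhmatulin-Smith relation, initially unstressed fiber)."""
    return c * (np.sqrt(eps * (1.0 + eps)) - eps)

def taut_string_speed(eps0):
    """Transverse wave speed in a fiber pretensioned to strain eps0,
    i.e. sqrt(sigma0/rho) with sigma0 = E*eps0 (linear elasticity)."""
    return c * np.sqrt(eps0)

eps_impact = 0.01            # strain produced by the impact (assumed)
print(f"Smith estimate, no preload: {smith_speed(eps_impact):7.1f} m/s")
for eps0 in (0.0025, 0.005, 0.01):
    print(f"taut-string estimate, eps0={eps0}: {taut_string_speed(eps0):7.1f} m/s")
```

Even a modest pretension of 0.5–1% strain yields transverse speeds of the same order as the impact-driven Smith value, which is why neglecting the preload can misplace the estimate by a factor of about two.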
-
Efficient and error-free information hiding in the hybrid domain of digital images using metaheuristic optimization
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 197–210
Data hiding in digital images is a promising direction in cybersecurity. Digital steganography methods provide imperceptible transmission of secret data over an open communication channel. The efficiency of information embedding depends on its imperceptibility, capacity, and robustness. These quality criteria are mutually inverse: improving one indicator usually degrades the others. A balance between them can be achieved using metaheuristic optimization. Metaheuristics are a class of optimization algorithms that find an optimal, or near-optimal, solution for a variety of problems, including those that are difficult to formalize, by simulating various natural processes, for example, the evolution of species or the behavior of animals. In this study, we propose an approach to data hiding in the hybrid spatial-frequency domain of digital images based on metaheuristic optimization. The embedding operation is a change of a block of image pixels according to some change matrix, which we select adaptively for each block using metaheuristic optimization algorithms. We compare the performance of three metaheuristics in finding the best change matrix: a genetic algorithm, particle swarm optimization, and differential evolution. Experimental results showed that the proposed approach provides high imperceptibility of embedding, high capacity, and error-free extraction of the embedded information. At the same time, the change matrices for each block need not be stored for later data extraction. This improves the user experience and reduces the chance of an attacker discovering the steganographic attachment. Relative to our previous algorithm for embedding information into discrete cosine transform coefficients with the QIM method [Evsutin, Melman, Meshcheryakov, 2021], the metaheuristics increased the imperceptibility indicator, estimated by the PSNR metric, and the capacity by 26.02% and 30.18% for the genetic algorithm, 26.01% and 19.39% for particle swarm optimization, and 27.30% and 28.73% for differential evolution, respectively.
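A minimal sketch of the building blocks named above, not the authors' algorithm: QIM embedding of one bit into a DCT coefficient of an 8×8 block, with the PSNR computed afterwards. The quantization step and coefficient position are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

Q = 8.0  # quantization step (assumed)

def qim_embed(coef, bit):
    """Move the coefficient onto the QIM sub-lattice encoding the given bit."""
    return 2 * Q * np.round((coef - bit * Q) / (2 * Q)) + bit * Q

def qim_extract(coef):
    return int(np.round(coef / Q)) % 2

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)   # one 8x8 cover block
coeffs = dctn(block, norm="ortho")
coeffs[3, 4] = qim_embed(coeffs[3, 4], 1)            # embed a single bit
stego = idctn(coeffs, norm="ortho")

assert qim_extract(dctn(stego, norm="ortho")[3, 4]) == 1
psnr = 10 * np.log10(255.0**2 / np.mean((stego - block) ** 2))
# In the paper, the spatial change (stego - block) is not a fixed formula:
# a per-block change matrix is searched by a GA, PSO, or differential
# evolution so that PSNR grows while extraction stays error-free.
```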
-
Numerical simulation of fluid flow in a blood pump in the FlowVision software package
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 1025–1038
A numerical simulation of fluid flow in a blood pump was performed using the FlowVision software package. This test problem, provided by the Center for Devices and Radiological Health of the U.S. Food and Drug Administration, involved computing the fluid flow for several design modes, each specified by a particular liquid flow rate and rotor speed. The necessary data (the exact geometry, flow conditions, and fluid characteristics) were provided to all research participants, who used different software packages for modeling. Numerical simulations were performed in FlowVision for six calculation modes with a Newtonian fluid and the standard $k-\varepsilon$ turbulence model; in addition, the fifth mode was computed with the $k-\omega$ SST turbulence model and with the Carreau rheological fluid model. In the first stage of the numerical simulation, mesh convergence was investigated, on the basis of which a final mesh with about 6 million cells was chosen. Because of the large number of cells, part of the calculations was performed on the Lomonosov-2 cluster to accelerate the study. As a result of the numerical simulation, we obtained and analyzed the pressure difference between the pump inlet and outlet and the velocity between the rotor blades and in the diffuser region, and we visualized the velocity distribution in selected cross-sections. For all design modes the numerically obtained pressure difference was compared with the experimental data, and for the fifth calculation mode the velocity distributions between the rotor blades and in the diffuser region were also compared with experiment. Data analysis has shown good agreement of the FlowVision results with the experimental results and with numerical simulations in other software packages. The results obtained in FlowVision for the US FDA test suggest that the FlowVision software package can be used to solve a wide range of hemodynamic problems.
-
Models for spatial selection during location-aware beamforming in ultra-dense millimeter wave radio access networks
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 195–216
The work establishes how the potential for spatial selection of useful and interfering signals, by the signal-to-interference ratio criterion, depends on the positioning error of user equipment when a base station equipped with an antenna array forms beams from device locations. Configurable simulation parameters include a planar antenna array with a varying number of elements, the movement trajectory, and the accuracy of user-equipment location estimation expressed as the root mean square error of the coordinate estimates. The model implements three algorithms for controlling the shape of the antenna radiation pattern: 1) steering the beam with one maximum and one null; 2) controlling the shape and width of the main beam; 3) adaptive beamforming. The simulation results showed that the first algorithm is most effective when the number of antenna array elements is no more than 5 and the positioning error is no more than 7 m, while the second algorithm is appropriate when the number of antenna array elements is more than 15 and the positioning error is more than 5 m. Adaptive beamforming is implemented using a training signal and provides optimal spatial selection of useful and interfering signals without device location data, but its hardware implementation is highly complex. Scripts of the developed models are available for verification. The results can be used to develop scientifically grounded recommendations for beam control in ultra-dense millimeter-wave radio access networks of the fifth and subsequent generations.
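A hedged sketch of the first algorithm in the list above (one steered maximum and one steered null), reduced to a uniform linear array rather than the paper's planar one, using the standard minimum-norm solution of the linear gain constraints. Element count, spacing, and angles are assumptions.

```python
import numpy as np

N, d = 8, 0.5                     # elements and spacing in wavelengths (assumed)

def steering(theta_deg):
    """Steering vector of a uniform linear array for a given arrival angle."""
    theta = np.deg2rad(theta_deg)
    return np.exp(2j * np.pi * d * np.arange(N) * np.sin(theta))

# Constraints: unit gain towards the user (20 deg), a null towards the
# interferer (-40 deg); in the paper these angles come from (error-prone)
# positioning of the user equipment.
A = np.column_stack([steering(20.0), steering(-40.0)])
f = np.array([1.0, 0.0])
w = A @ np.linalg.solve(A.conj().T @ A, f)   # minimum-norm weights, A^H w = f

angles = np.linspace(-90, 90, 721)
gain_db = 20 * np.log10(
    np.abs(np.array([w.conj() @ steering(a) for a in angles])) + 1e-12)
# gain_db peaks near +20 deg and drops sharply near -40 deg.
```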
-
The development of an ARM system on chip based processing unit for data stream computing
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 505–509
Modern big-science projects are becoming so data intensive that offline processing of stored data is infeasible. High data throughput computing, or Data Stream Computing, for future projects must deal with terabytes of data per second, which cannot be kept in long-term storage elements. Conventional data centres based on typical server-grade hardware are expensive and biased towards processing power. The overall I/O bandwidth can be increased with massive parallelism, usually at the expense of excessive processing power and high energy consumption. An ARM System on Chip (SoC) based processing unit may address system I/O and CPU balance, affordability, and energy efficiency, since ARM SoCs are mass-produced and designed to be energy efficient for use in mobile devices. Such a processing unit is currently in development, with a design goal of 20 Gb/s I/O throughput and significant processing power. The I/O capabilities of consumer ARM SoCs are discussed along with the performance and I/O throughput tests carried out to date.
-
Models of soil organic matter dynamics: problems and perspectives
Computer Research and Modeling, 2016, v. 8, no. 2, pp. 391–399
Soil, as a complex multifunctional open system, is one of the most difficult objects to model. In spite of serious achievements in soil system modeling, existing models do not reflect all aspects and processes of soil organic matter mineralization and humification. The problems and “hot spots” in modeling the dynamics of soil organic matter and biophilous elements were identified on the basis of the development and wide application of the ROMUL and EFIMOD models. The following aspects are discussed: further theoretical background; improving the structure of the models; preparation and uncertainty of the initial data; inclusion of all soil biota (microorganisms, micro- and meso-fauna) as factors of humification; the impact of soil mineralogy on C and N dynamics; the hydro-thermal regime and organic matter distribution over the whole soil profile; and the vertical and horizontal migration of soil organic matter. Effective feedback from modellers to experimentalists is necessary to solve the listed problems.
Keywords: mathematical model, soil organic matter.
-
Assessing the validity of clustering of panel data by Monte Carlo methods (using the data of the Russian regional economy as an example)
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1501–1513
The paper considers a method for studying panel data based on agglomerative hierarchical clustering, which groups objects by the similarities and differences of their features into a hierarchy of nested clusters. We used two alternative ways of calculating Euclidean distances between objects: the distance between values averaged over the observation interval, and the distance computed on the data for all considered years. Three alternative methods of calculating the distances between clusters were compared: in the first, the distance between two clusters is that between their nearest elements; in the second, the average over pairs of elements; in the third, the distance between the most distant elements. The efficiency of two clustering quality indices, the Dunn and Silhouette indices, was studied for selecting the optimal number of clusters and evaluating the statistical significance of the obtained solutions. The statistical reliability of a cluster structure was assessed by comparing the quality of clustering on the real sample with the quality of clustering on artificially generated samples of panel data with the same number of objects, features, and time-series lengths. Artificial samples were generated from a fixed probability distribution, using simulations of Gaussian white noise and of a random walk. Calculations with the Silhouette index showed that a random walk is characterized not only by spurious regression but also by “spurious clustering”. Clustering was considered reliable for a given number of selected clusters if the index value on the real sample exceeded the 95% quantile of the values for the artificial data. A set of time series of indicators characterizing production in the regions of the Russian Federation was used as the real data sample. For these data only the Silhouette index shows reliable clustering at the level p < 0.05. Calculations also showed that index values for the real data are generally closer to the values for random walks than to those for white noise, but differ significantly from both. Since a three-dimensional feature space is used, the quality of clustering was also evaluated visually; one can distinguish clusters of points located close to each other, which are also identified as clusters by the applied hierarchical clustering algorithm.
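A condensed sketch of the described validation scheme, with assumed details (average linkage, Euclidean distances over all years, the Silhouette index): the index on the real panel is compared with its 95% quantile over random-walk surrogates of the same shape. The data here are placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 20)).cumsum(axis=1)   # placeholder for the real panel

def silhouette_at_k(data, k):
    """Silhouette index for an average-linkage clustering into k clusters."""
    labels = fcluster(linkage(data, method="average"), k, criterion="maxclust")
    return silhouette_score(data, labels)

k = 3
observed = silhouette_at_k(X, k)

# Surrogates: random walks with the same shape; clustering counts as
# "reliable" only if the observed index beats the surrogate 95% quantile.
surrogate = [silhouette_at_k(rng.normal(size=X.shape).cumsum(axis=1), k)
             for _ in range(200)]
reliable = observed > np.quantile(surrogate, 0.95)
```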
-
Performance prediction for chosen types of loops over one-dimensional arrays with embedding-driven intermediate representations analysis
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 211–224
A method for mapping a set of intermediate representations (IR) of C and C++ programs into a vector embedding space is considered, in order to create an empirical framework for static performance prediction using the LLVM compiler infrastructure. Embeddings make programs easier to compare because they avoid direct comparison of Control Flow Graphs (CFG) and Data Flow Graphs (DFG). The method is based on a series of transformations of the initial IR: instrumentation (injection of artificial instructions in an instrumentation compiler pass, depending on the load offset delta between the current and previous instruction), mapping of the instrumented IR into a multidimensional vector with IR2Vec, and dimension reduction with the t-SNE (t-distributed stochastic neighbor embedding) method. The D1 cache miss ratio measured with the perf stat tool is used as the performance metric. A heuristic criterion for deciding which programs have a higher or lower cache miss ratio is given; it is based on the 2D embeddings of the programs. The instrumentation compiler pass developed in this work is described: how it generates and injects artificial instructions into the IR within the memory model used. The software pipeline implementing the performance estimation on top of the LLVM compiler infrastructure is presented. Computational experiments are performed on synthetic tests, sets of programs with identical CFGs but different sequences of offsets used when accessing a one-dimensional array of a given size. The correlation coefficient between the performance metric and the distance to the worst program's embedding is measured and proves to be negative regardless of t-SNE initialization, which supports the heuristic criterion. The generation of such synthetic tests is also described. Moreover, the spread of the performance metric over the programs in such a test is proposed as a metric to be improved by exploring more test generators.
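A sketch of the analysis stage only; the LLVM instrumentation pass and IR2Vec embedding are assumed to have already produced one vector per program, and perf stat the D1 miss ratios (both are placeholders below). t-SNE reduces to 2D, then the correlation between the miss ratio and the distance to the worst program's embedding is measured.

```python
import numpy as np
from sklearn.manifold import TSNE
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
vectors = rng.normal(size=(60, 300))     # placeholder IR2Vec embeddings
miss_ratio = rng.uniform(0.0, 0.3, 60)   # placeholder perf stat D1 miss ratios

emb2d = TSNE(n_components=2, random_state=0).fit_transform(vectors)
worst = emb2d[np.argmax(miss_ratio)]     # embedding of the worst program
dist = np.linalg.norm(emb2d - worst, axis=1)
r, p = pearsonr(miss_ratio, dist)        # the paper reports a negative r
```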