- Running applications on a hybrid cluster
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 475-483
A hybrid cluster implies the use of computational devices with radically different architectures: usually a conventional CPU architecture (e.g., x86_64) and a GPU architecture (e.g., NVIDIA CUDA). Creating and exploiting such a cluster requires some experience: to harness the full computational power of the described system and obtain a substantial speedup for computational tasks, many factors should be taken into account. These factors include hardware characteristics (e.g., the network infrastructure, the type of data storage, the GPU architecture) as well as the software stack (e.g., the MPI implementation, GPGPU libraries). So, in order to run scientific applications, GPU capabilities, software features, task size and other factors should be considered.
This report discusses opportunities and problems of hybrid computations. Some statistics from runs of test programs and applications will be demonstrated. The main focus of interest is open-source applications (e.g., OpenFOAM) that support GPGPU (with some parts rewritten to use GPGPU directly or by replacing libraries).
There are several approaches to organizing heterogeneous computations for different GPU architectures, of which the CUDA library and the OpenCL framework are compared here. CUDA is becoming quite typical for hybrid systems with NVIDIA cards, but OpenCL offers portability, which can be a determining factor when choosing a framework for development. We also put emphasis on multi-GPU systems, which are often used to build hybrid clusters. Calculations were performed on a hybrid cluster of the SPbU computing center.
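To make the multi-GPU setting above concrete, here is a minimal Python sketch (not from the report; it assumes mpi4py and CuPy are available) of the common pattern of binding each MPI rank to one of a node's GPUs:

```python
# A minimal sketch of one common pattern on hybrid clusters: binding each
# MPI rank to a GPU. Device counts and the reduction are illustrative.
import cupy as cp
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Round-robin mapping of ranks to local GPUs (one rank per GPU is typical).
n_gpus = cp.cuda.runtime.getDeviceCount()
cp.cuda.Device(rank % n_gpus).use()

# Each rank does its share of the work on its GPU ...
x = cp.arange(1_000_000, dtype=cp.float64)
local_sum = float(cp.sum(x))  # move the scalar back to the host for MPI

# ... and partial results are combined over the interconnect.
total = comm.allreduce(local_sum, op=MPI.SUM)
if rank == 0:
    print("global sum:", total)
```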
- A plankton community: a zooplankton effect in phytoplankton dynamics
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 751-768
The paper uses methods of mathematical modeling to estimate the influence of zooplankton on the dynamics of phytoplankton abundance. We propose a three-component model of the “phytoplankton–zooplankton” community with discrete time, considering the heterogeneity of zooplankton according to the developmental stage and type of feeding; the model takes into account cannibalism in the zooplankton community, during which mature individuals of some of its species consume juvenile ones. Survival rates at the early stages of the zooplankton life cycle depend explicitly on the interaction between zooplankton and phytoplankton. The loss of phytoplankton biomass due to zooplankton consumption is considered explicitly. We use the Holling type II functional response to describe saturation during biomass consumption. The dynamics of the phytoplankton community is represented by the Ricker model, which makes it possible to implicitly take into account the restriction of phytoplankton biomass growth by the availability of external resources (mineral nutrition, oxygen, light, etc.).
The study analyzed scenarios of the transition from stationary dynamics to fluctuations in the abundance of phyto- and zooplankton for various values of the intrapopulation parameters determining the nature of the dynamics of the species constituting the community, and of the parameters of their interaction. The focus is on exploring complex modes of community dynamics. In the framework of the model used to describe phytoplankton dynamics in the absence of interspecific interaction, the phytoplankton dynamics undergoes a series of period-doubling bifurcations. At the same time, when zooplankton appears, the cascade of period-doubling bifurcations in phytoplankton and in the community as a whole is realized earlier (at lower reproduction rates of phytoplankton cells) than in the case when phytoplankton develops in isolation. Furthermore, variation in the level of cannibalism in zooplankton can significantly change both the existing dynamics in the community and its bifurcations; e.g., with a certain structure of zooplankton food relationships, the realization of the Neimark–Sacker bifurcation scenario in the community is possible. Considering that the level of cannibalism in zooplankton can change due to natural maturation processes and the achievement of the carnivorous stage by some individuals, one can expect pronounced changes in the dynamic mode of the community, i.e., abrupt transitions from regular to quasiperiodic dynamics (according to the Neimark–Sacker scenario) and further to cycles with a short period (the implementation of a period-halving bifurcation).
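For intuition, the following toy Python sketch iterates a minimal discrete-time system of the kind described: Ricker growth for phytoplankton coupled to a Holling type II grazing term. The coupling form and all parameter values are illustrative assumptions, not the paper's actual three-component model (which also resolves zooplankton stages and cannibalism):

```python
# Illustrative two-component sketch: Ricker phytoplankton growth plus a
# Holling type II consumption term. Parameters are assumed, not the paper's.
import numpy as np

r, K = 2.5, 1.0        # phytoplankton reproduction rate and capacity (assumed)
a, h = 1.2, 0.8        # attack rate and handling time of the Holling II term
e, s = 0.4, 0.6        # conversion efficiency and zooplankton survival (assumed)

def step(x, y):
    """One generation: x = phytoplankton biomass, y = zooplankton biomass."""
    grazing = a * x / (1.0 + a * h * x)   # Holling type II functional response
    x_next = x * np.exp(r * (1.0 - x / K)) - grazing * y
    y_next = s * y + e * grazing * y      # grazed biomass fuels zooplankton
    return max(x_next, 0.0), max(y_next, 0.0)

x, y = 0.3, 0.1
traj = [(x, y)]
for _ in range(200):
    x, y = step(x, y)
    traj.append((x, y))
print(traj[-5:])  # inspect the long-term regime (fixed point, cycle, ...)
```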
- Biomathematical system of the nucleic acids description
Computer Research and Modeling, 2020, v. 12, no. 2, pp. 417-434
The article is devoted to the application of various methods of mathematical analysis, the search for patterns, and the study of the nucleotide composition of DNA sequences at the genomic level. New methods of mathematical biology that made it possible to detect and visualize the hidden ordering of genetic nucleotide sequences located in the chromosomes of cells of living organisms are described. The research was based on the work on algebraic biology of S. V. Petukhov, doctor of physical and mathematical sciences, who first introduced and justified new algebras and hypercomplex numerical systems describing genetic phenomena. This paper describes a new phase in the development of matrix methods in genetics for studying the properties of nucleotide sequences (and their physicochemical parameters), built on the principles of finite geometry. The aim of the study is to demonstrate the capabilities of new algorithms and discuss the discovered properties of genetic DNA and RNA molecules. The study includes three stages: parameterization, scaling, and visualization. Parameterization is the selection of the parameters taken into account, which are based on the structural and physicochemical properties of nucleotides as elementary components of the genome. Scaling plays the role of “focusing” and allows one to explore genetic structures at various scales. Visualization includes the selection of the axes of the coordinate system and the method of visual display. The algorithms presented in this work are put forward as a new toolkit for the development of research software for the analysis of long nucleotide sequences, with the ability to display genomes in parametric spaces of various dimensions. One of the significant results of the study is that new criteria were obtained for classifying the genomes of various living organisms and identifying interspecific relationships. The new concept allows one to assess, visually and numerically, the variability of the physicochemical parameters of nucleotide sequences. It also makes it possible to relate the parameters of DNA and RNA molecules to fractal geometric mosaics, and it reveals the ordering and symmetry of polynucleotides, as well as their noise immunity. The results obtained justified the introduction of new terms: “genometry” as a methodology of computational strategies and “genometrica” as the specific parameters of a particular genome or nucleotide sequence. In connection with the results obtained, questions of biosemiotics and of the hierarchical levels of organization of living matter are raised.
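As a loose illustration of the parameterization-scaling-visualization pipeline (the encoding below is an assumption based on classical nucleotide dichotomies, not the authors' algorithm), one can map nucleotides to numeric features and trace a cumulative trajectory in a parametric plane:

```python
# Hypothetical sketch: map each nucleotide to physicochemical features
# (purine/pyrimidine, 3 vs 2 hydrogen bonds) and trace the cumulative
# "walk" of a sequence in a 2D parametric space.
import numpy as np

FEATURES = {
    "A": (+1, -1),
    "G": (+1, +1),
    "C": (-1, +1),
    "T": (-1, -1),
}

def parametric_walk(seq, window=1):
    """Cumulative 2D trajectory; `window` crudely mimics the scaling stage."""
    pts = np.array([FEATURES[n] for n in seq if n in FEATURES], dtype=float)
    if window > 1:  # average over non-overlapping windows ("focusing")
        k = len(pts) // window
        pts = pts[: k * window].reshape(k, window, 2).mean(axis=1)
    return np.cumsum(pts, axis=0)

traj = parametric_walk("ATGCGCGATATGCCGTA", window=1)
print(traj[:5])  # trajectory points to be plotted as x/y coordinates
```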
- Optimization of the brain command dictionary based on the statistical proximity criterion in silent speech recognition task
Computer Research and Modeling, 2023, v. 15, no. 3, pp. 675-690
In our research, we focus on the classification problem in silent speech recognition, with the aim of developing a brain–computer interface (BCI) based on electroencephalographic (EEG) data, which will be capable of assisting people with mental and physical disabilities and expanding human capabilities in everyday life. Our previous research has shown that the silent pronunciation of some words results in almost identical distributions of electroencephalographic signal data. Such a phenomenon has a suppressive impact on the quality of neural network model behavior. This paper proposes a data processing technique that distinguishes between statistically remote and inseparable classes in the dataset. Applying the proposed approach helps us reach the goal of maximizing the semantic load of the dictionary used in the BCI.
Furthermore, we propose the existence of a statistical predictive criterion for the accuracy of binary classification of the words in a dictionary. Such a criterion aims to estimate the lower and upper bounds of classifier behavior only by measuring quantitative statistical properties of the data (in particular, using the Kolmogorov–Smirnov method). We show that higher levels of classification accuracy can be achieved by applying the proposed predictive criterion, making it possible to form a dictionary optimized in terms of semantic load for EEG-based BCIs. Moreover, using such a dictionary as a training dataset for classification problems ensures the statistical remoteness of the classes by taking into account the semantic and phonetic properties of the corresponding words, and it improves the classification behavior of silent speech recognition models.
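A hedged sketch of such a quantitative remoteness check, using the standard two-sample Kolmogorov–Smirnov test from SciPy (the scalar features and significance threshold are illustrative stand-ins for real EEG features):

```python
# Sketch: decide whether the feature distributions of two silently
# pronounced words are statistically separable via the two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Stand-ins for per-trial scalar features of two words (assumed data).
word_a = rng.normal(0.0, 1.0, size=300)
word_b = rng.normal(0.4, 1.0, size=300)

stat, p_value = ks_2samp(word_a, word_b)
ALPHA = 0.05  # assumed significance level

if p_value < ALPHA:
    print(f"statistically remote (KS={stat:.3f}): keep both words")
else:
    print(f"inseparable (KS={stat:.3f}): drop one word from the dictionary")
```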
- Stochastic optimization in digital pre-distortion of the signal
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 399-416
In this paper, we test the performance of some modern stochastic optimization methods and practices on the digital pre-distortion (DPD) problem, which is a valuable part of signal processing on base stations providing wireless communication. In the first part of our study, we focus on the search for the best-performing method and its proper modifications. In the second part, we propose a new, quasi-online testing framework that allows us to fit our modeling results to the behavior of a real-life DPD prototype, retest a selection of the practices considered in the first part, and confirm the advantages of the method that appears to be the best under real-life conditions. For the model used, the maximum achieved improvement in depth is 7% in the standard regime and 5% in the online regime (the metric itself is on a logarithmic scale). We also achieve a halving of the working time while preserving a 3% and 6% improvement in depth for the standard and online regimes, respectively. All comparisons are made to the Adam method, which was highlighted as the best stochastic method for the DPD problem in [Pasechnyuk et al., 2021], and to the Adamax method, which is the best in the proposed online regime.
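For reference, a compact NumPy sketch of the Adam update rule used as the baseline (the quadratic toy objective and hyperparameters are illustrative, not the DPD model):

```python
# Minimal Adam optimizer on a toy least-squares objective (assumed setup).
import numpy as np

def adam(grad, x0, lr=0.01, b1=0.9, b2=0.999, eps=1e-8, steps=500):
    x = x0.astype(float)
    m = np.zeros_like(x)  # first-moment (mean) estimate
    v = np.zeros_like(x)  # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)   # bias correction
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy least-squares stand-in for the pre-distortion fitting objective.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
solution = adam(lambda x: A.T @ (A @ x - b), np.zeros(2))
print(solution)
```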
- Classification of pest-damaged coniferous trees in unmanned aerial vehicles images using convolutional neural network models
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1271-1294
This article considers the task of multiclass classification of coniferous trees with varying degrees of damage by insect pests in images obtained using unmanned aerial vehicles (UAVs). We propose the use of convolutional neural networks (CNNs) for the classification of fir trees Abies sibirica and Siberian pine trees Pinus sibirica in UAV imagery. In our approach, we develop three CNN models based on the classical U-Net architecture, designed for pixel-wise classification of images (semantic segmentation). The first model, Mo-U-Net, incorporates several changes to the classical U-Net model. The second and third models, MSC-U-Net and MSC-Res-U-Net, respectively, form ensembles of three Mo-U-Net models, each varying in depth and input image size. Additionally, the MSC-Res-U-Net model integrates residual blocks. To validate our approach, we created two datasets of UAV images depicting trees affected by pests, specifically Abies sibirica and Pinus sibirica, and trained the three proposed CNN models using mIoULoss and Focal Loss as loss functions. Subsequent evaluation focused on the effectiveness of each trained model in classifying damaged trees. The results obtained indicate that when mIoULoss served as the loss function, the proposed models fell short of practical applicability in the forestry industry, failing to achieve classification accuracy above the threshold value of 0.5 for individual classes of both tree species according to the IoU metric. However, under Focal Loss, the MSC-Res-U-Net and Mo-U-Net models, in contrast to the third proposed model, MSC-U-Net, exhibited high classification accuracy (surpassing the threshold value of 0.5) for all classes of Abies sibirica and Pinus sibirica trees. These results underscore the practical significance of the MSC-Res-U-Net and Mo-U-Net models for forestry professionals, enabling accurate classification and early detection of pest outbreaks in coniferous trees.
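As a sketch of the second loss function mentioned, here is a minimal NumPy implementation of a multi-class focal loss for per-pixel class probabilities (the focusing parameter and toy inputs are assumptions; the paper's exact weighting may differ):

```python
# Hedged sketch of a multi-class focal loss over softmax probabilities.
import numpy as np

def focal_loss(probs, targets, gamma=2.0, eps=1e-7):
    """probs: (N, C) softmax outputs; targets: (N,) integer class labels."""
    p_t = np.clip(probs[np.arange(len(targets)), targets], eps, 1.0)
    # (1 - p_t)^gamma down-weights easy, well-classified pixels so that
    # rare damage classes dominate the gradient.
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

probs = np.array([[0.7, 0.2, 0.1],    # confident, correct -> tiny loss
                  [0.3, 0.4, 0.3]])   # uncertain -> larger loss
targets = np.array([0, 1])
print(focal_loss(probs, targets))
```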
- Generating database schema from requirement specification based on natural language processing and large language model
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1703-1713
A Large Language Model (LLM) is an advanced artificial intelligence algorithm that utilizes deep learning methodologies and extensive datasets to process, understand, and generate human-like text. These models are capable of performing various tasks, such as summarization, content creation, translation, and predictive text generation, making them highly versatile in applications involving natural language understanding. Generative AI, often associated with LLMs, specifically focuses on creating new content, particularly text, by leveraging the capabilities of these models. Developers can harness LLMs to automate complex processes, such as extracting relevant information from system requirement documents and translating it into a structured database schema. This capability has the potential to streamline the database design phase, saving significant time and effort while ensuring that the resulting schema aligns closely with the given requirements. By integrating LLM technology with Natural Language Processing (NLP) techniques, the efficiency and accuracy of generating database schemas from textual requirement specifications can be significantly enhanced. The proposed tool utilizes these capabilities to read system requirement specifications, which may be provided as text descriptions or as Entity-Relationship Diagrams (ERDs). It then analyzes the input and automatically generates a relational database schema in the form of SQL commands. This innovation eliminates much of the manual effort involved in database design, reduces human errors, and accelerates development timelines. The aim of this work is to provide a tool that can be invaluable for software developers, database architects, and organizations aiming to optimize their workflow and align technical deliverables with business requirements seamlessly.
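A schematic Python sketch of such a pipeline (the `call_llm` function is a hypothetical placeholder for an LLM backend, and the prompt wording and example requirement are illustrative only):

```python
# Hypothetical pipeline: requirement text in, SQL CREATE TABLE statements out.
def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError("wire this to your LLM provider of choice")

PROMPT_TEMPLATE = """You are a database designer.
Read the following system requirement specification and output a relational
database schema as SQL CREATE TABLE statements with primary and foreign keys.

Requirements:
{requirements}
"""

def schema_from_requirements(requirements: str) -> str:
    return call_llm(PROMPT_TEMPLATE.format(requirements=requirements))

# Example usage (hypothetical requirement text):
# sql = schema_from_requirements(
#     "A library lends books to members; each loan records a due date."
# )
# print(sql)  # expected: CREATE TABLE member (...); CREATE TABLE book (...); ...
```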
- Simulation of traffic flows based on the quasi-gasdynamic approach and the cellular automata theory using supercomputers
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 175-194
The purpose of the study is to simulate the dynamics of traffic flows on city road networks and to systematize the current state of affairs in this area. The introduction states that the development of intelligent transportation systems as an integral part of modern transportation technologies is coming to the fore. The core of these systems contains adequate mathematical models that make it possible to simulate traffic as close to reality as possible. The necessity of using supercomputers due to the large amount of calculations is also noted; therefore, the creation of special parallel algorithms is needed. The beginning of the article is devoted to an up-to-date classification of traffic flow models and a characterization of each class, including their distinctive features and relevant examples with references. Further, the main focus of the article shifts to the macroscopic and microscopic models developed by the authors and to the place of these models in the aforementioned classification. The macroscopic model is based on the continuum approach and uses the ideology of quasi-gasdynamic systems of equations. Its advantages over existing models of this class are indicated. The model is presented in both one-dimensional and two-dimensional versions. Both versions feature the ability to study multi-lane traffic. In the two-dimensional version this is made possible by introducing the concept of “lateral” velocity, i.e., the speed of changing lanes. The latter version allows for carrying out calculations in a computational domain that corresponds to the actual geometry of the road. The section also presents test results of modeling vehicle dynamics on a road fragment with a local widening and on a road fragment with traffic lights, including several variants of traffic light regimes. In the first case, the calculations allow us to draw interesting conclusions about the impact of a road widening on the road capacity as a whole, and in the second case, to select the optimal regime configuration to obtain the “green wave” effect. The microscopic model is based on the cellular automata theory and the single-lane Nagel–Schreckenberg model, generalized by the authors of the article to the multi-lane case. The model implements various behavioral strategies of drivers. Test computations for a real section of the transport network in Moscow city center are presented. To achieve an adequate representation of vehicles moving through the network according to road traffic regulations, the authors implemented special algorithms adapted for parallel computing. Test calculations were performed on the K-100 supercomputer installed at the Centre of Collective Usage of KIAM RAS.
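For the microscopic part, the single-lane Nagel–Schreckenberg automaton that the authors generalize can be sketched in a few lines of NumPy (road length, density, v_max and the slowdown probability p below are illustrative):

```python
# Classical single-lane Nagel-Schreckenberg cellular automaton on a ring.
import numpy as np

rng = np.random.default_rng(1)
L, density, v_max, p = 100, 0.2, 5, 0.3

pos = np.sort(rng.choice(L, size=int(L * density), replace=False))
vel = np.zeros(len(pos), dtype=int)

def ns_step(pos, vel):
    gaps = (np.roll(pos, -1) - pos - 1) % L    # empty cells to the car ahead
    vel = np.minimum(vel + 1, v_max)           # 1. acceleration
    vel = np.minimum(vel, gaps)                # 2. braking to avoid collision
    slow = rng.random(len(vel)) < p
    vel = np.maximum(vel - slow, 0)            # 3. random slowdown
    return (pos + vel) % L, vel                # 4. movement (periodic road)

for _ in range(50):
    pos, vel = ns_step(pos, vel)
print("mean speed:", vel.mean())
```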
- Valuation of machines at the random process of their degradation and premature sales
Computer Research and Modeling, 2024, v. 16, no. 3, pp. 797-815
A model of the process of using machinery and equipment is considered, which takes into account the probabilistic nature of the process of their operation and sale. It takes into account the possibility of random hidden failures, after which the condition of the machine deteriorates abruptly, as well as the randomly arising need for a premature (before the end of its service life) sale of the machine, which requires, generally speaking, a random amount of time. The model is focused on assessing the market value and service life of machines in accordance with International Valuation Standards. Strictly speaking, the market value of a used machine depends on its technical condition, but in practice appraisers only take into account its age, since generally accepted measures of the technical condition of machines do not yet exist. As a result, the market value of a used machine is assumed to be equal to the average market value of similar machines of the corresponding age. For these purposes, appraisers use coefficients that reflect the influence of the age of machines on their market value. Such coefficients are not always justified and take into account neither the degradation of the machine nor the probabilistic nature of the process of its use. The proposed model is based on the anticipation-of-benefits principle. In it, we characterize the state of the machine by the intensity of the benefits it brings. The machine is subjected to a compound Poisson failure process, and after a failure its condition abruptly worsens and may even reach its limit state. Situations also arise that preclude further use of the machine by its owner. In such situations, the owner puts the machine up for sale before the end of its service life (prematurely), and the sale takes a random amount of time. The model allows us to take into account the influence of such situations, to construct an analytical relationship linking the market value of a machine with its condition, and to calculate the average coefficients of change in the market value of machines with age. It is also possible to take into account the influence of inflation and the scrap value of the machine. We have found that the rate of premature sales has a significant impact on the service life and market value of new and used machines. At the same time, the dependence of the market value of machines on age is largely determined by the coefficient of variation of the service life of the machines. The results obtained allow us to derive more reasonable estimates of the market value of machines, including for the purposes of the system of national accounts.
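A hedged Monte Carlo sketch of the valuation idea (a benefit flow degraded by a compound Poisson failure process, with value as expected discounted benefits; all rates and jump sizes are assumed for illustration, not the paper's calibration):

```python
# Monte Carlo sketch: expected discounted benefits of a degrading machine.
import numpy as np

rng = np.random.default_rng(42)
T, dt = 20.0, 0.05          # horizon (years) and time step (assumed)
lam, rho = 0.5, 0.08        # failure intensity and discount rate (assumed)

def simulate_value(n_paths=1000):
    values = np.empty(n_paths)
    for i in range(n_paths):
        benefit, value, t = 1.0, 0.0, 0.0
        while t < T and benefit > 0.05:          # 0.05 ~ limit state (assumed)
            if rng.random() < lam * dt:          # Poisson failure in [t, t+dt)
                benefit *= rng.uniform(0.5, 0.9) # random jump degradation
            value += benefit * np.exp(-rho * t) * dt
            t += dt
        values[i] = value
    return values.mean()

print("expected discounted benefits (machine value):", simulate_value())
```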
- Changepoint detection in biometric data: retrospective nonparametric segmentation methods based on dynamic programming and sliding windows
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1295-1321
This paper is dedicated to the analysis of medical and biological data obtained through locomotor training and testing of astronauts conducted both on Earth and during spaceflight. These experiments can be described as the astronaut's movement on a treadmill according to a predefined regimen in various speed modes. During these modes, not only the speed is recorded but also a range of parameters, including heart rate, ground reaction force, and others. In order to analyze the dynamics of the astronaut's condition over an extended period, it is necessary to perform a qualitative segmentation of the movement modes so as to assess the target metrics independently. This task becomes particularly relevant in the development of an autonomous life support system for astronauts that operates without direct supervision from Earth. The segmentation of the target data is complicated by the presence of various anomalies, such as deviations from the predefined regimen, mode transitions of arbitrary and varying duration, hardware failures, and other factors. The paper includes a detailed review of several contemporary retrospective (offline) nonparametric methods for detecting multiple changepoints, i.e., sudden changes in the properties of the observed time series occurring at unknown moments. Special attention is given to algorithms and statistical measures that determine the homogeneity of the data and to methods for detecting change points. The paper considers approaches based on dynamic programming and on sliding-window methods. The second part of the paper focuses on the numerical modeling of these methods using characteristic examples of experimental data, including both “simple” and “complex” speed profiles of movement. The analysis conducted allowed us to identify the preferred methods, which will be further evaluated on the complete dataset. Preference is given to methods that keep the segmentation close to a reference one, can potentially detect both boundaries of transient processes, and are robust with respect to their internal parameters.
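A minimal sketch of the sliding-window approach reviewed above: score the discrepancy between two adjacent windows as they slide along the series and take local maxima above a threshold as candidate change points (the window size, threshold, and mean-shift cost are illustrative; a rank or kernel statistic would make the cost nonparametric):

```python
# Sliding-window changepoint detection on a synthetic "speed profile".
import numpy as np

def window_scores(x, w):
    """Discrepancy between the left and right windows at each position t."""
    scores = np.zeros(len(x))
    for t in range(w, len(x) - w):
        left, right = x[t - w : t], x[t : t + w]
        scores[t] = abs(left.mean() - right.mean())  # mean-shift cost (assumed)
    return scores

rng = np.random.default_rng(7)
# Three regimes, as in a treadmill test with changing speed modes.
x = np.concatenate([rng.normal(m, 0.3, 200) for m in (4.0, 8.0, 6.0)])

s = window_scores(x, w=50)
candidates = np.where(s > 1.0)[0]  # threshold chosen for illustration
# Group consecutive candidate indices and keep the peak of each group.
groups = np.split(candidates, np.where(np.diff(candidates) > 1)[0] + 1)
print("estimated changepoints:", [int(g[np.argmax(s[g])]) for g in groups])
```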