- Approaches to cloud infrastructures integration
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 583-590
One important direction in the development of cloud technologies today is the creation of methods for integrating different cloud infrastructures. In the academic community, the need for such integration arises from a frequent shortage of in-house computing resources and the resulting necessity to attract additional ones. This article surveys existing approaches to integrating cloud infrastructures with one another: federations and so-called ‘cloud bursting’. A ‘federation’ in terms of the OpenNebula cloud platform is built on a ‘one master zone, several slave zones’ scheme, where a ‘zone’ denotes a separate cloud infrastructure within the federation. All zones in such an integration share a common user database, and the whole federation is managed through the master zone only. This approach suits the case where the cloud infrastructures of geographically distributed branches of a single organization need to be integrated; however, its high degree of centralization makes it inappropriate for joining the clouds of different organizations, and it is not applicable at all to clouds based on different software platforms. The federative integration model implemented in EGI Federated Cloud does allow clouds based on different software platforms to be connected, but it requires deploying a substantial number of additional services that are specific to EGI Federated Cloud, which makes the approach single-purpose and uncommon. The ‘cloud bursting’ model has none of the limitations listed above; however, in the OpenNebula platform, on which the cloud infrastructure of the Laboratory of Information Technologies of the Joint Institute for Nuclear Research (LIT JINR) is based, this model was implemented only for integration with a fixed set of commercial cloud resource providers. Drawing on the authors’ experience in joining the clouds of the organizations they represent, as well as with EGI Federated Cloud, the LIT JINR cloud team developed a ‘cloud bursting’ driver for integrating OpenNebula-based clouds with each other and with OpenStack-based ones. The article describes the driver’s architecture, the technologies and protocols it relies on, and experience with its use.
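The abstract does not reproduce the driver's interface; purely as an illustration of the kind of component described, the sketch below shows a hypothetical cloud-bursting driver skeleton that maps local VM lifecycle actions onto a remote cloud API. All names here (RemoteCloud, create_server, and so on) are assumptions for this sketch, not the authors' code.

```python
# Hypothetical sketch of a cloud-bursting driver skeleton (illustration only).
from dataclasses import dataclass


@dataclass
class RemoteCloud:
    """Minimal stand-in for a remote cloud client (e.g. an OpenStack API wrapper)."""
    endpoint: str
    token: str

    def create_server(self, template: dict) -> str:
        raise NotImplementedError("call the remote provider's API here")

    def get_server_state(self, server_id: str) -> str:
        raise NotImplementedError

    def delete_server(self, server_id: str) -> None:
        raise NotImplementedError


class BurstingDriver:
    """Maps local VM lifecycle actions onto a remote cloud."""

    def __init__(self, remote: RemoteCloud):
        self.remote = remote

    def deploy(self, vm_template: dict) -> str:
        # Translate the local VM template and start it on the remote cloud.
        return self.remote.create_server(vm_template)

    def poll(self, server_id: str) -> str:
        # Report the remote VM state back to the local scheduler.
        return self.remote.get_server_state(server_id)

    def shutdown(self, server_id: str) -> None:
        # Release remote resources when the local cloud no longer needs them.
        self.remote.delete_server(server_id)
```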
- The application of genetic algorithms for organizational systems’ management in case of emergency
Computer Research and Modeling, 2019, v. 11, no. 3, pp. 533-556
Optimal management of a fuel supply system comes down to choosing an energy development strategy that provides consumers with the most efficient and reliable fuel and energy supply. As part of the program to switch the distributed heat supply management system of the Udmurt Republic to renewable energy sources, an “Information-analytical system of regional alternative fuel supply management” was developed. The paper presents a mathematical model of optimal management of a fuel supply logistic system consisting of three interconnected levels: raw material accumulation points, fuel preparation points, and fuel consumption points, which are heat sources. To improve the performance of the regional fuel supply system, the information-analytical system must be modified and its set of functions extended with methods for rapid response when an emergency occurs. An emergency at any one of these levels requires the management of the whole system to be reconfigured. The paper presents models and algorithms of optimal management for emergencies involving the breakdown of such production links of the logistic system as raw material accumulation points and fuel preparation points. In the mathematical models, the target criterion is minimization of the costs associated with the functioning of the logistic system during the emergency. The developed algorithms are implemented using genetic optimization algorithms, which made it possible to obtain a more accurate solution in less time. The developed models and algorithms are integrated into the information-analytical system, enabling effective management of the alternative fuel supply of the Udmurt Republic in case of emergency.
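The abstract does not specify the encoding or operators used; the sketch below is only a minimal illustration of a genetic algorithm of the kind described, here minimizing a hypothetical reassignment cost when heat sources must be re-served after a fuel preparation point fails. The cost matrix and GA parameters are assumptions, not the paper's data.

```python
import random

# Hypothetical cost of serving heat source j from preparation point i
# after an emergency has disabled one of the points (illustrative numbers).
COST = [
    [4.0, 7.0, 3.0, 8.0],   # preparation point 0
    [6.0, 2.0, 5.0, 4.0],   # preparation point 1
]
N_SOURCES = 4
POP, GENS, MUT = 30, 100, 0.1


def cost(assignment):
    """Total cost of an assignment: one preparation point per heat source."""
    return sum(COST[p][j] for j, p in enumerate(assignment))


def crossover(a, b):
    cut = random.randint(1, N_SOURCES - 1)
    return a[:cut] + b[cut:]


def mutate(a):
    return [random.randrange(len(COST)) if random.random() < MUT else g for g in a]


def genetic_search():
    pop = [[random.randrange(len(COST)) for _ in range(N_SOURCES)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=cost)
        elite = pop[: POP // 2]          # selection: keep the cheaper half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(POP - len(elite))]
        pop = elite + children
    return min(pop, key=cost)


best = genetic_search()
print("best assignment:", best, "cost:", cost(best))
```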
- Comparison of Arctic zone RF companies with different Polar Index ratings by economic criteria with the help of machine learning tools
Computer Research and Modeling, 2020, v. 12, no. 1, pp. 201-215
The paper presents a comparative analysis of the enterprises of the Arctic Zone of the Russian Federation (AZ RF) by economic indicators in accordance with the Polar Index rating. The study includes numerical data on 193 enterprises located in the AZ RF. Machine learning methods are applied, both standard ones from open-source packages and the authors’ own original methods: the method of Optimally Reliable Partitions (ORP) and the method of Statistically Weighted Syndromes (SWS). Partitions were selected by maximizing a quality functional; the study used the simplest family of one-dimensional partitions with a single boundary point, as well as a collection of two-dimensional partitions with one boundary point in each of the two combined variables. Permutation tests make it possible not only to assess the statistical reliability of the revealed regularities but also to exclude partitions of excessive complexity from the set of revealed regularities. Patterns connecting the class number with the economic indicators are revealed using the SDT method on one-dimensional indicators. The study also presents the regularities revealed within the simplest one-dimensional model with one boundary point at a significance level no worse than p < 0.001. The so-called sliding control method (cross-validation) was used for a reliable evaluation of this diagnostic ability. As a result, a set of methods with sufficient effectiveness was identified. A collective method combining the results of several machine learning methods showed the high importance of economic indicators for dividing enterprises in accordance with the Polar Index rating. The study showed that the companies that entered the top of the Polar Index rating are, on the whole, distinguishable by financial indicators among all companies of the Arctic Zone. However, it would be useful to supplement the list of indicators with ecological and social criteria.
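The ORP and SWS methods are not described in the abstract; as a rough illustration of the simplest building block mentioned there, the sketch below finds a one-dimensional partition with a single boundary point that maximizes classification accuracy and checks it with a permutation test. The data and the choice of quality functional are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one economic indicator per enterprise and a binary class label
# (e.g. "in the top of the rating" vs. "not"); purely illustrative numbers.
x = np.concatenate([rng.normal(0.0, 1.0, 80), rng.normal(1.5, 1.0, 80)])
y = np.concatenate([np.zeros(80, dtype=int), np.ones(80, dtype=int)])


def best_single_threshold(x, y):
    """One-dimensional partition with a single boundary point maximizing accuracy."""
    best_t, best_acc = None, 0.0
    for t in np.unique(x):
        acc = max(np.mean((x > t) == y), np.mean((x <= t) == y))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc


t, acc = best_single_threshold(x, y)

# Permutation test: how often does a random relabeling reach the same quality?
perm_acc = np.array([best_single_threshold(x, rng.permutation(y))[1]
                     for _ in range(200)])
p_value = np.mean(perm_acc >= acc)
print(f"threshold={t:.2f}, accuracy={acc:.2f}, permutation p-value={p_value:.3f}")
```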
- Microtubule protofilament bending characterization
Computer Research and Modeling, 2020, v. 12, no. 2, pp. 435-443
This work is devoted to the analysis of conformational changes in tubulin dimers and tetramers, in particular to the assessment of the bending of microtubule protofilaments. Three recently used approaches for estimating the bending of tubulin protofilaments are reviewed: (1) measurement of the angle between the vector passing through the H7 helices of the $\alpha$ and $\beta$ tubulin monomers in the straight structure and the same vector in the curved structure of tubulin; (2) measurement of the angle between the vector connecting the centers of mass of the subunit and the associated GTP nucleotide, and the vector connecting the centers of mass of the same nucleotide and the adjacent tubulin subunit; (3) measurement of the three rotation angles of the bent tubulin subunit relative to the straight subunit. Quantitative estimates of the angles at the intra- and inter-dimer interfaces of tubulin in published crystal structures, calculated according to the three metrics, are presented. The intra-dimer angles measured by method (3) within a single structure, as well as across different structures, were the most similar, which indicates a lower sensitivity of this method to local changes in tubulin conformation and characterizes it as more robust. Measuring the bending angle between the H7 helices (method 1) produces somewhat underestimated values of the curvature per dimer. Method (2), while at first glance generating bending angle values consistent with cryo-electron microscopy estimates for curved protofilaments, significantly overestimates the angles in straight structures. For the structures of tubulin tetramers in complex with the stathmin protein, the bending angles calculated with all three metrics varied quite significantly between the first and second dimers (up to 20% or more), which indicates the sensitivity of all metrics to slight variations in the conformation of tubulin dimers within these complexes. A detailed description of the procedures for measuring the bending of tubulin protofilaments, together with the advantages and disadvantages of the various metrics, will increase the reproducibility and clarity of future analyses of tubulin structures and will hopefully make it easier to compare results obtained by different scientific groups.
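Methods (1) and (2) both reduce to computing the angle between two vectors built from atomic centers of mass; the sketch below shows only that elementary computation on made-up coordinates. The choice of atoms defining each vector is the part that actually distinguishes the metrics and is not reproduced here.

```python
import numpy as np


def angle_between(u, v):
    """Angle in degrees between two vectors (cosine clipped to avoid rounding errors)."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))


# Illustrative coordinates only: e.g. centers of mass of the H7 helices of the
# alpha and beta monomers in a straight and in a curved tubulin structure.
h7_alpha_straight, h7_beta_straight = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 40.0])
h7_alpha_curved, h7_beta_curved = np.array([0.0, 0.0, 0.0]), np.array([0.0, 8.0, 39.2])

v_straight = h7_beta_straight - h7_alpha_straight
v_curved = h7_beta_curved - h7_alpha_curved
print(f"bending angle: {angle_between(v_straight, v_curved):.1f} degrees")
```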
- Combining the agent approach and the general equilibrium approach to analyze the influence of the shadow sector on the Russian economy
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 669-684
This article discusses the influence of the shadow, informal, and household sectors on the dynamics of a stochastic model with heterogeneous agents. The study combines the general equilibrium approach, which explains the behavior of demand, supply, and prices in an economy with several interacting markets, with a multi-agent approach. The analyzed model describes an economy with aggregate uncertainty and an infinite number of heterogeneous agents (households). The source of heterogeneity is the idiosyncratic income shocks experienced by agents in the legal and shadow sectors of the economy. The analysis uses an algorithm that approximates the dynamics of the distribution function of the capital stocks of individual agents by the dynamics of its first and second moments. The synthesis of the agent-based approach and the general equilibrium approach is carried out through a computer implementation of the recursive feedback between micro-agents and the macro-environment. The behavior of the impulse response functions of the main variables of the model confirms the positive influence of the shadow economy (below a certain limit) on mitigating the rate of decline of economic indicators during recessions, especially for developing economies. The scientific novelty of the study is the combination of a multi-agent approach and a general equilibrium approach for modeling macroeconomic processes at the regional and national levels. Further research may involve more detailed general equilibrium models that can, in particular, describe the behavior of heterogeneous groups of agents in the entrepreneurial sector of the economy.
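The abstract mentions approximating the cross-sectional distribution of agents' capital by its first two moments; the sketch below only illustrates that idea on a toy capital accumulation rule with idiosyncratic income shocks. The saving rule, shock process, and parameters are assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

N, T = 10_000, 50            # number of agents, number of periods
delta, s = 0.1, 0.3          # depreciation and a fixed saving rate (toy values)
k = np.full(N, 1.0)          # initial capital of every agent

moments = []
for t in range(T):
    # Idiosyncratic income shocks, e.g. differing between legal and shadow employment.
    income = rng.lognormal(mean=0.0, sigma=0.4, size=N)
    k = (1.0 - delta) * k + s * income
    # Track only the first two moments of the cross-sectional distribution.
    moments.append((k.mean(), k.var()))

m1, m2 = moments[-1]
print(f"final cross-sectional mean={m1:.3f}, variance={m2:.3f}")
```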
- Boundary conditions for lattice Boltzmann equations in applications to hemodynamics
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 865-882
We consider a one-dimensional three-velocity kinetic lattice Boltzmann model, which represents a second-order difference scheme for the hydrodynamic equations. In the framework of kinetic theory, this system describes the propagation and interaction of three types of particles. It has been shown previously that the lattice Boltzmann model with an external virtual force is equivalent, in the hydrodynamic limit, to the one-dimensional hemodynamic equations for elastic vessels; this equivalence can be established with the Chapman–Enskog expansion. The external force in the model makes it possible to adjust the functional dependence between the lumen area of the vessel and the pressure applied to its wall, so the form of the external force allows various elastic properties of vessels to be modeled. In the present paper, physiological boundary conditions at the inlets and outlets of the arterial network are formulated in terms of the lattice Boltzmann variables. We consider the following boundary conditions: pressure and blood flow at the inlet of the vascular network; pressure and blood flow at vessel bifurcations; wave reflection conditions (corresponding to complete occlusion of the vessel) and wave absorption at the ends of the vessels (corresponding to the passage of the wave without distortion); as well as RCR-type conditions, which are analogous to electrical circuits and consist of two resistors (corresponding to the impedance of the vessel at whose end the boundary conditions are set and to the friction forces in the microcirculatory bed) and one capacitor (describing the elastic properties of the arterioles). Numerical simulations were performed: the propagation of blood in a network of three vessels was considered, with blood flow boundary conditions set at the entrance of the network and RCR boundary conditions at its ends. The solutions of the lattice Boltzmann model are compared with benchmark solutions (numerical calculations with a second-order MacCormack difference scheme without viscous terms); it is shown that both approaches give very similar results.
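The abstract does not include the scheme itself; the sketch below is only a generic D1Q3 BGK lattice Boltzmann step (streaming plus collision towards a standard equilibrium), without the virtual-force term or the hemodynamic boundary conditions described in the paper, and with toy parameters and periodic boundaries.

```python
import numpy as np

# Generic D1Q3 lattice: velocities -1, 0, +1 and the standard weights.
c = np.array([-1, 0, 1])
w = np.array([1 / 6, 2 / 3, 1 / 6])
nx, tau, steps = 200, 0.8, 500

# Initial state: a small density bump, zero velocity, equilibrium populations.
rho = 1.0 + 0.05 * np.exp(-((np.arange(nx) - nx // 2) ** 2) / 50.0)
u = np.zeros(nx)
f = np.array([w[i] * rho for i in range(3)])


def equilibrium(rho, u):
    """Standard D1Q3 equilibrium with sound speed c_s^2 = 1/3."""
    return np.array([w[i] * rho * (1 + 3 * c[i] * u + 4.5 * (c[i] * u) ** 2 - 1.5 * u ** 2)
                     for i in range(3)])


for _ in range(steps):
    # Streaming: each population moves one node along its velocity (periodic here).
    for i in range(3):
        f[i] = np.roll(f[i], c[i])
    # Macroscopic moments and BGK collision.
    rho = f.sum(axis=0)
    u = (c[:, None] * f).sum(axis=0) / rho
    f += (equilibrium(rho, u) - f) / tau

print(f"total density (conserved): {rho.sum():.4f}")
```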
- Method for prediction of aerodynamic characteristics of helicopter rotors based on edge-based schemes in code NOISEtte
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1097-1122
The paper gives a detailed description of the developed methods for simulating the turbulent flow around a helicopter rotor and calculating its aerodynamic characteristics. The system of Reynolds-averaged Navier–Stokes equations for a viscous compressible gas, closed by the Spalart–Allmaras turbulence model, is used as the basic mathematical model. The model is formulated in a non-inertial rotating coordinate system associated with the rotor. Wall functions are used to set the boundary conditions on the surface of the rotor.
The numerical solution of the resulting system of differential equations is carried out on mixed-element unstructured meshes that include prismatic layers near the surface of the streamlined body. The numerical method is based on the original vertex-centered finite-volume EBR schemes. A feature of these schemes is their higher accuracy, achieved through edge-based reconstruction of variables on extended quasi-one-dimensional stencils, combined with a moderate computational cost that makes series of computations affordable. The methods of Roe and Lax–Friedrichs are used as approximate Riemann solvers; the Roe method is corrected in the case of low-Mach flows. Near discontinuities or in regions with large gradients, a quasi-one-dimensional WENO scheme or a local switch to a quasi-one-dimensional TVD-type reconstruction is used. Time integration is carried out with an implicit three-layer second-order scheme with Newton linearization of the system of difference equations. The resulting system of linear equations is solved by the stabilized conjugate gradient method.
The numerical methods are implemented as part of the in-house code NOISEtte, following a two-level MPI–OpenMP parallel model, which enables high-performance computations on meshes of hundreds of millions of nodes using hundreds of thousands of CPU cores of modern supercomputers.
Based on the results of the numerical simulation, the aerodynamic characteristics of the helicopter rotor are calculated, namely the thrust, the torque, and their dimensionless coefficients.
Validation of the developed technique is carried out by simulating the turbulent flow around the Caradonna–Tung two-blade rotor and the KNRTU-KAI four-blade model rotor in hover mode, a tail rotor in a duct, and a rigid main rotor in oblique flow. The numerical results are compared with the available experimental data.
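The paper's exact normalization of the coefficients is not given in the abstract; one common convention for rotors defines the thrust and torque coefficients through the rotor disk area and the tip speed, as in the sketch below. The convention, like the sample numbers, is an assumption for illustration only.

```python
import math


def rotor_coefficients(thrust, torque, rho, radius, omega):
    """Dimensionless thrust and torque coefficients under one common convention:
    C_T = T / (rho * A * (omega * R)^2),  C_Q = Q / (rho * A * (omega * R)^2 * R),
    where A = pi * R^2 is the rotor disk area."""
    area = math.pi * radius ** 2
    tip_speed = omega * radius
    c_t = thrust / (rho * area * tip_speed ** 2)
    c_q = torque / (rho * area * tip_speed ** 2 * radius)
    return c_t, c_q


# Illustrative numbers only (not from the paper).
c_t, c_q = rotor_coefficients(thrust=2500.0, torque=400.0,
                              rho=1.225, radius=1.143, omega=135.0)
print(f"C_T = {c_t:.4f}, C_Q = {c_q:.5f}")
```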
- Analysis of the effectiveness of machine learning methods in the problem of gesture recognition based on the data of electromyographic signals
Computer Research and Modeling, 2021, v. 13, no. 1, pp. 175-194
Gesture recognition is an urgent challenge in developing human-machine interface systems. We analyzed machine learning methods for gesture classification based on electromyographic muscle signals in order to identify the most effective one. The following methods were considered: the naive Bayesian classifier (NBC), logistic regression, decision tree, random forest, gradient boosting, support vector machine (SVM), the $k$-nearest neighbors algorithm, and ensembles (NBC and decision tree, NBC and gradient boosting, gradient boosting and decision tree). Electromyography (EMG) was chosen as the method of obtaining information about gestures. This solution does not require the hand to be in the field of view of a camera and can be used to recognize finger movements. To test the effectiveness of the selected gesture recognition methods, a device for recording the EMG signal was developed; it includes three electrodes and an EMG sensor connected to a microcontroller and a power supply. The following gestures were chosen: a clenched fist, “thumb up”, “Victory”, squeezing an index finger, and waving a hand from right to left. Accuracy, precision, recall, and execution time were used to evaluate the effectiveness of the classifiers. These metrics were calculated for three variants of the placement of the EMG electrodes on the forearm. According to the test results, the most effective methods are the $k$-nearest neighbors algorithm, random forest, and the ensemble of NBC and gradient boosting; the average accuracy of the ensemble over the three electrode positions was 81.55%. The electrode placement at which the machine learning methods achieve maximum accuracy was also determined: in this position, one of the differential electrodes is located at the intersection of the flexor digitorum profundus and the flexor pollicis longus, and the second above the flexor digitorum superficialis.
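As an illustration only of the kind of comparison described (not the authors' pipeline or data), the sketch below evaluates several of the listed classifiers with accuracy, precision, and recall on synthetic features standing in for EMG measurements.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for EMG features: 5 gesture classes, purely illustrative.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=8,
                           n_classes=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "NBC": GaussianNB(),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "Random forest": RandomForestClassifier(random_state=0),
    "Gradient boosting": GradientBoostingClassifier(random_state=0),
}

for name, clf in classifiers.items():
    y_pred = clf.fit(X_train, y_train).predict(X_test)
    print(f"{name:18s} accuracy={accuracy_score(y_test, y_pred):.3f} "
          f"precision={precision_score(y_test, y_pred, average='macro'):.3f} "
          f"recall={recall_score(y_test, y_pred, average='macro'):.3f}")
```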
- Extracting knowledge from text messages: overview and state-of-the-art
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1291-1315
In general, solving the information explosion problem can be delegated to systems for the automatic processing of digital data. These systems are intended for recognizing, sorting, meaningfully processing, and presenting data in formats readable and interpretable by humans. Creating intelligent knowledge extraction systems that handle unstructured data would be a natural solution in this area. At the same time, the evident progress on these tasks for structured data contrasts with the limited success of unstructured data processing and, in particular, document processing. Currently, this research area is undergoing active development and investigation. The present paper is a systematic survey of both Russian and international publications dedicated to the leading trend in automatic text data processing: Text Mining (TM). We cover the main tasks and notions of TM, as well as its place in the current AI landscape. Furthermore, we analyze the complications that arise in natural language processing (NLP), since natural language texts are weakly structured and often carry ambiguous linguistic information. We describe the stages of text data preparation, cleaning, and feature selection which, alongside the data obtained via morphological, syntactic, and semantic analysis, constitute the input for the TM process. This process can be represented as a mapping from a set of text documents to «knowledge». Using the case of stock trading, we demonstrate the formalization of the problem of making a trade decision based on a set of analytical recommendations. Examples of such mappings are the methods of Information Retrieval (IR), text summarization, sentiment analysis, document classification and clustering, etc. The common point of all TM tasks and techniques is the selection of word forms and their derivatives used to recognize content in natural language symbol sequences. Taking IR as an example, we examine classic types of search, such as searching for word forms, phrases, patterns, and concepts, and additionally consider the augmentation of patterns with syntactic and semantic information. Next, we provide a general description of NLP instruments: morphological, syntactic, semantic, and pragmatic analysis. Finally, we conclude with a comparative analysis of modern TM tools, which can be helpful for selecting a suitable TM platform based on the user’s needs and skills.
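As a toy illustration only of the "documents to knowledge" mapping and the stock-trading example described above (not a tool from the survey), the sketch below runs a minimal preprocessing plus TF-IDF plus classifier pipeline on a few made-up analyst messages.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up analyst messages labeled with a trade decision (illustrative only).
texts = [
    "strong earnings, raise target price, buy",
    "guidance cut, downgrade to sell",
    "solid quarter, upgrade, buy recommendation",
    "profit warning, sell the stock",
]
labels = ["buy", "sell", "buy", "sell"]

# Preprocessing (tokenization, lowercasing, stop-word removal) and feature
# selection are folded into the TF-IDF vectorizer; the classifier then maps
# feature vectors to trade decisions.
model = make_pipeline(TfidfVectorizer(lowercase=True, stop_words="english"),
                      LogisticRegression())
model.fit(texts, labels)

print(model.predict(["analysts upgrade the stock after strong earnings"]))
```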
- Stochastic optimization in digital pre-distortion of the signal
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 399-416
In this paper, we test the performance of several modern stochastic optimization methods and practices on the digital pre-distortion (DPD) problem, a valuable part of signal processing on base stations providing wireless communication. In the first part of our study, we focus on the search for the best-performing method and its proper modifications. In the second part, we propose a new, quasi-online testing framework that allows us to fit our modeling results to the behavior of a real-life DPD prototype, retest some of the practices considered in the previous section, and confirm the advantages of the method that turns out to be the best under real-life conditions. For the model used, the maximum achieved improvement in depth is 7% in the standard regime and 5% in the online regime (the metric itself is on a logarithmic scale). We also achieve a halving of the working time while preserving a 3% and 6% improvement in depth for the standard and online regimes, respectively. All comparisons are made to the Adam method, which was highlighted as the best stochastic method for the DPD problem in [Pasechnyuk et al., 2021], and to the Adamax method, which is the best in the proposed online regime.
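Purely as an illustration of comparing the two optimizers named above (not the paper's model, data, or depth metric), the sketch below fits a small nonlinear model to a toy distortion signal with Adam and Adamax and reports the residual power on a logarithmic scale.

```python
import torch

torch.manual_seed(0)

# Toy stand-in for a DPD-style fitting task: learn a small nonlinear model that
# predicts the distortion from the input signal (data and model are made up).
x = torch.randn(2048, 1)
y = 0.8 * x - 0.3 * x ** 3 + 0.01 * torch.randn_like(x)


def train(optimizer_cls, steps=500):
    model = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                                torch.nn.Linear(16, 1))
    opt = optimizer_cls(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((model(x) - y) ** 2)
        loss.backward()
        opt.step()
    # A depth-like metric on a logarithmic scale: residual power relative to
    # the signal power in decibels (this normalization is an assumption).
    return 10 * torch.log10(torch.mean((model(x) - y) ** 2) / torch.mean(y ** 2)).item()


for name, cls in [("Adam", torch.optim.Adam), ("Adamax", torch.optim.Adamax)]:
    print(f"{name}: {train(cls):.1f} dB")
```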