Search results for 'clustering':
Articles found: 48
  1. Bogdanov A.V., Gankevich I.G., Gayduchok V.Yu., Yuzhanin N.V.
    Running applications on a hybrid cluster
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 475-483

    A hybrid cluster implies the use of computational devices with radically different architectures. Usually, these are a conventional CPU architecture (e.g. x86_64) and a GPU architecture (e.g. NVIDIA CUDA). Creating and exploiting such a cluster requires some experience: in order to harness all the computational power of such a system and obtain substantial speedup for computational tasks, many factors should be taken into account. These factors include hardware characteristics (e.g. network infrastructure, type of data storage, GPU architecture) as well as the software stack (e.g. MPI implementation, GPGPU libraries). So, in order to run scientific applications, GPU capabilities, software features, task size and other factors should be considered.

    This report discusses opportunities and problems of hybrid computations. Some statistics from test program and application runs are presented. The main focus is on open-source applications (e.g. OpenFOAM) that support GPGPU (with some parts rewritten to use GPGPU directly or by replacing libraries).

    There are several approaches to organizing heterogeneous computations for different GPU architectures, of which the CUDA library and the OpenCL framework are compared here. The CUDA library is becoming quite typical for hybrid systems with NVIDIA cards, but OpenCL offers portability, which can be a determining factor when choosing a framework for development. We also put emphasis on multi-GPU systems, which are often used to build hybrid clusters. Calculations were performed on a hybrid cluster of the SPbU computing center.
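
    As a hedged illustration of the portability point above, the following sketch enumerates OpenCL platforms and devices with pyopencl (a library not mentioned in the article; both it and the use of Python are assumptions). The same enumeration covers NVIDIA, AMD and CPU devices, whereas a CUDA-based equivalent would see NVIDIA hardware only.

```python
# Minimal OpenCL device-enumeration sketch (pyopencl is an assumed tool, not part
# of the article); it illustrates why OpenCL is attractive for heterogeneous
# clusters: one API lists CPUs and GPUs of different vendors.
import pyopencl as cl

for platform in cl.get_platforms():            # e.g. "NVIDIA CUDA", "Intel(R) OpenCL"
    print(f"Platform: {platform.name}")
    for device in platform.get_devices():      # all CPUs/GPUs visible through OpenCL
        kind = cl.device_type.to_string(device.type)
        print(f"  {kind}: {device.name}, "
              f"{device.max_compute_units} compute units, "
              f"{device.global_mem_size // 2**20} MiB global memory")
```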

  2. Volokhova A.V., Zemlyanay E.V., Kachalov V.V., Rikhvitskiy V.S.
    Simulation of the gas condensate reservoir depletion
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1081-1095

    One of the problems in developing gas condensate fields lies in the fact that the condensed hydrocarbons in the gas-bearing layer can get stuck in the pores of the formation and hence cannot be extracted. In this regard, research is underway to increase the recoverability of hydrocarbons in such fields. This research includes a wide range of studies on mathematical simulation of the passage of gas condensate mixtures through a porous medium under various conditions.

    In the present work, within the classical approach based on the Darcy law and the law of continuity of flows, we formulate an initial-boundary value problem for a system of nonlinear differential equations that describes the depletion of a multicomponent gas condensate mixture in a porous reservoir. A computational scheme is developed on the basis of the finite-difference approximation and the fourth-order Runge-Kutta method. The scheme can be used for simulations both in the spatially one-dimensional case, corresponding to the conditions of the laboratory experiment, and in the two-dimensional case, when it comes to modeling a flat gas-bearing formation with circular symmetry.
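
    The abstract names a finite-difference space discretization advanced in time with the classical fourth-order Runge-Kutta method. Below is a minimal, hedged sketch of such a time stepper in Python/NumPy; the right-hand side is a generic diffusion-like placeholder, not the paper's multicomponent filtration equations, and all names and values are illustrative.

```python
import numpy as np

def rk4_step(f, t, u, dt):
    """One classical fourth-order Runge-Kutta step for du/dt = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt / 2 * k1)
    k3 = f(t + dt / 2, u + dt / 2 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def rhs(t, u, dx=0.01, d=1e-3):
    """Placeholder right-hand side: a 1D diffusion-like finite-difference operator,
    standing in for the (much more involved) gas-condensate depletion equations."""
    du = np.zeros_like(u)
    du[1:-1] = d * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return du

u = np.linspace(1.0, 0.5, 101)   # illustrative initial profile on a uniform grid
for _ in range(1000):
    u = rk4_step(rhs, 0.0, u, dt=1e-3)
```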

    The computer implementation is based on the combination of C++ and Maple tools, using the MPI parallel programming technique to speed up the calculations. The calculations were performed on the HybriLIT cluster of the Multifunctional Information and Computing Complex of the Laboratory of Information Technologies of the Joint Institute for Nuclear Research.

    Numerical results are compared with the experimental data on the pressure dependence of the output of a nine-component hydrocarbon mixture obtained at a laboratory facility (VNIIGAZ, Ukhta). The calculations were performed for two types of porous filler in the laboratory model of the formation: a terrigenous filler at 25 °C and a carbonate one at 60 °C. It is shown that the approach developed ensures agreement of the numerical results with the experimental data. By fitting the numerical results to the experimental data on the depletion of the laboratory reservoir, we obtained the values of the parameters that determine the inter-phase transition coefficient for the simulated system. Using the same parameters, a computer simulation of the depletion of a thin gas-bearing layer in the circular symmetry approximation was carried out.
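
    The fitting of model parameters (here, those defining the inter-phase transition coefficient) to laboratory depletion data can be illustrated with a generic least-squares sketch; the model function, data values and parameter names below are purely hypothetical and are not taken from the article.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical depletion data: reservoir pressure (MPa) vs. condensate output (arb. units).
p_exp = np.array([30.0, 25.0, 20.0, 15.0, 10.0])
q_exp = np.array([0.02, 0.05, 0.11, 0.20, 0.31])

def model(params, p):
    """Toy two-parameter output curve; the real 'model' would be the depletion simulation."""
    a, b = params
    return a * np.exp(-b * p)

def residuals(params):
    return model(params, p_exp) - q_exp

fit = least_squares(residuals, x0=[1.0, 0.1])
print("fitted parameters:", fit.x)
```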

  3. Belotelov N.V., Loginov F.V.
    The agent model of intercultural interactions: the emergence of cultural uncertainties
    Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1143-1162

    The article describes a simulation agent-based model of intercultural interactions in a country whose population belongs to different cultures. It is assumed that the space of cultures can be represented as a Hilbert space, in which certain subspaces correspond to different cultures. In the model, the concept of culture is understood as a structured subspace of the Hilbert space. This makes it possible to describe the state of agents by a vector in the Hilbert space. Each agent is described by its belonging to a certain «culture». The number of agents belonging to particular cultures is determined by the demographic processes that correspond to these cultures, the depth and integrity of the educational process, as well as the intensity of intercultural contacts. Interaction between agents occurs within clusters into which, according to certain criteria, the entire set of agents is divided. When agents interact according to a certain algorithm, the length and angle that characterize the state of the agent change. In the course of the simulation, depending on the number of agents belonging to different cultures, the intensity of demographic and educational processes, as well as the intensity of intercultural contacts, aggregates of agents (clusters) are formed whose agents belong to different cultures. Such intercultural clusters do not entirely belong to any of the cultures initially considered in the model and thereby create uncertainties in cultural dynamics. The paper presents the results of simulation experiments that illustrate the influence of demographic and educational processes on the dynamics of intercultural clusters. The development of the proposed approach to the study of transitional states in the development of cultures is discussed.
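
    The abstract describes agents as vectors whose length and angle change when they interact inside a cluster. The toy sketch below illustrates one possible (assumed) update rule of this kind, in which two interacting agents drift toward their pairwise mean; it is not the authors' actual algorithm, and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each agent is a 2-D "culture" vector; length and angle encode its cultural state.
agents = rng.normal(size=(100, 2))

def interact(a, b, alpha=0.1):
    """Assumed interaction rule: both agents move a step toward the pairwise mean,
    which changes both the length and the angle of their state vectors."""
    mean = (a + b) / 2
    return a + alpha * (mean - a), b + alpha * (mean - b)

for _ in range(50):                      # random pairwise interactions within one cluster
    i, j = rng.choice(len(agents), size=2, replace=False)
    agents[i], agents[j] = interact(agents[i], agents[j])
```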

  4. Plokhotnikov K.E.
    The problem of choosing solutions in the classical format of the description of a molecular system
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1573-1600

    The numerical methods recently developed by the author for calculating a molecular system by directly solving the Schrödinger equation with the Monte Carlo method revealed a huge uncertainty in the choice of solutions. On the one hand, it turned out to be possible to build many new solutions; on the other hand, the problem of their connection with reality became sharply aggravated. In ab initio quantum mechanical calculations, the problem of choosing solutions is not so acute after the transition to the classical format of describing a molecular system in terms of the potential energy, the method of molecular dynamics, etc. In this paper, we investigate the problem of choosing solutions in the classical format of describing a molecular system without taking quantum mechanical prerequisites into account. As it turns out, the problem of choosing solutions in the classical format reduces to a specific marking of the configuration space in the form of a set of stationary points and the reconstruction of the corresponding potential energy function. In this formulation, the choice problem splits into two physical and mathematical problems: finding all stationary points of a given potential energy function (the direct problem) and reconstructing the potential energy function from a given set of stationary points (the inverse problem). In this paper, using a computational experiment, the direct problem is discussed using the example of a description of a monoatomic cluster. The number and shape of the locally equilibrium (saddle) configurations of the binary potential are estimated numerically. An appropriate measure is introduced to distinguish configurations in space. A format for constructing the entire chain of multiparticle contributions to the potential energy function is proposed: binary, three-particle, etc., up to the maximal multiparticle potential. The existence of an infinite number of locally equilibrium (saddle) configurations for the maximal multiparticle potential is discussed and illustrated. A method for varying the number of stationary points by combining multiparticle contributions to the potential energy function is proposed. The results listed above are aimed at reducing the huge arbitrariness in the choice of the form of the potential that currently takes place. Reducing the arbitrariness of choice means that the available knowledge about a very specific set of stationary points is made consistent with the corresponding form of the potential energy function.
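
    The direct problem described above (finding stationary points of a given potential energy function and telling minima from saddles) can be sketched as follows. The Lennard-Jones pair potential is used only as a common stand-in for "the binary potential"; the cluster size, starting configuration and step sizes are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def lj_energy(x, eps=1.0, sigma=1.0):
    """Total Lennard-Jones energy of a small cluster; x is a flat (n*3,) coordinate array."""
    pos = x.reshape(-1, 3)
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = np.linalg.norm(pos[i] - pos[j])
            e += 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return e

def hessian_eigenvalues(x, h=1e-4):
    """Finite-difference Hessian eigenvalues at x.  Near-zero values correspond to rigid
    translations/rotations; the rest are positive at a local minimum, mixed at a saddle."""
    n = len(x)
    hess = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x.copy(); xpp[i] += h; xpp[j] += h
            xpm = x.copy(); xpm[i] += h; xpm[j] -= h
            xmp = x.copy(); xmp[i] -= h; xmp[j] += h
            xmm = x.copy(); xmm[i] -= h; xmm[j] -= h
            hess[i, j] = (lj_energy(xpp) - lj_energy(xpm) - lj_energy(xmp) + lj_energy(xmm)) / (4 * h * h)
    return np.linalg.eigvalsh(hess)

rng = np.random.default_rng(1)
x0 = np.repeat(np.arange(4.0), 3) + rng.normal(scale=0.5, size=12)  # 4 perturbed atoms
res = minimize(lj_energy, x0, method="BFGS")                        # descend to a stationary point
print("energy:", res.fun)
print("Hessian eigenvalues:", np.round(hessian_eigenvalues(res.x), 3))
```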

  5. Okhapkina E.P., Okhapkin V.P.
    Approaches to a social network groups clustering
    Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1127-1139

    The research is devoted to the problem of the use of social networks as a tool of illegal activity and as a source of information that could be dangerous to society. The article presents the structure of a multi-agent system with which social network groups can be clustered according to criteria that uniquely define a group as destructive. The clustering algorithm used by the system's agents is described.
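
    The abstract does not reproduce the agents' clustering algorithm itself, so the sketch below uses ordinary k-means over hypothetical numerical features of groups simply to illustrate clustering by "destructiveness" criteria; the feature set, values and library choice are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature vectors for social network groups, e.g.
# [share of flagged lexicon, posts per day, fraction of anonymous members].
features = np.array([
    [0.02, 1.5, 0.10],
    [0.01, 0.9, 0.05],
    [0.35, 6.2, 0.70],
    [0.40, 5.8, 0.65],
    [0.03, 2.1, 0.15],
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)   # the two high-scoring groups fall into their own cluster
```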

  6. Musaev A.A., Grigoriev D.A.
    Extracting knowledge from text messages: overview and state-of-the-art
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1291-1315

    In general, solving the information explosion problem can be delegated to systems for automatic processing of digital data. These systems are intended for recognizing, sorting, meaningfully processing and presenting data in formats readable and interpretable by humans. The creation of intelligent knowledge extraction systems that handle unstructured data would be a natural solution in this area. At the same time, the evident progress in these tasks for structured data contrasts with the limited success of unstructured data processing, and, in particular, document processing. Currently, this research area is undergoing active development and investigation. The present paper is a systematic survey of both Russian and international publications dedicated to the leading trend in automatic text data processing: Text Mining (TM). We cover the main tasks and notions of TM, as well as its place in the current AI landscape. Furthermore, we analyze the complications that arise in natural language processing (NLP) of texts, which are weakly structured and often provide ambiguous linguistic information. We describe the stages of text data preparation, cleaning, and feature selection which, alongside the data obtained via morphological, syntactic, and semantic analysis, constitute the input for the TM process. This process can be represented as mapping a set of text documents to «knowledge». Using the case of stock trading, we demonstrate the formalization of the problem of making a trade decision based on a set of analytical recommendations. Examples of such mappings are methods of Information Retrieval (IR), text summarization, sentiment analysis, document classification and clustering, etc. The common point of all TM tasks and techniques is the selection of word forms and their derivatives used to recognize content in natural-language symbol sequences. Considering IR as an example, we examine classic types of search, such as searching for word forms, phrases, patterns and concepts. Additionally, we consider the augmentation of patterns with syntactic and semantic information. Next, we provide a general description of the NLP toolbox: morphological, syntactic, semantic and pragmatic analysis. Finally, we end the paper with a comparative analysis of modern TM tools which can be helpful for selecting a suitable TM platform based on the user’s needs and skills.
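
    Among the TM tasks listed above, document clustering admits a compact illustration: vectorize texts with TF-IDF and cluster the vectors. The corpus, parameters and use of scikit-learn below are assumptions for demonstration, not part of the survey.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy corpus; a real TM pipeline would add language-specific cleaning,
# lemmatization and richer feature selection before vectorization.
docs = [
    "stock prices rallied after the earnings report",
    "the central bank kept interest rates unchanged",
    "the team won the championship final",
    "the striker scored twice in the final match",
]

x = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(x)
print(labels)   # finance documents vs. sport documents
```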

  7. Ignatev N.A., Tuliev U.Y.
    Semantic structuring of text documents based on patterns of natural language entities
    Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1185-1197

    The technology of creating patterns from natural-language words (concepts) based on text data in the bag-of-words model is considered. Patterns are used to reduce the dimension of the original space in the description of documents and to search for semantically related words by topic. The process of dimensionality reduction is implemented through the formation of patterns of latent features. The variety of structures of document relations is investigated in order to divide the documents into topics in the latent space.

    It is assumed that a given set of documents (objects) is divided into two non-overlapping classes, for the analysis of which a common dictionary must be used. The belonging of words to this common dictionary is initially unknown. Objects of the two classes are treated as being in opposition to each other. Quantitative parameters of this opposition are determined through the stability values of each feature and generalized assessments of objects over non-overlapping sets of features.

    To calculate the stability, the feature values are divided into non-intersecting intervals, the optimal boundaries of which are determined by a special criterion. The maximum stability is achieved under the condition that the boundaries of each interval contain values of one of the two classes.

    The composition of features in sets (patterns of words) is formed from a sequence ordered by stability values. The process of formation of patterns and latent features based on them is implemented according to the rules of hierarchical agglomerative grouping.

    A set of latent features is used for cluster analysis of documents using metric grouping algorithms. The analysis applies a content authenticity coefficient based on the known class membership of the documents. The coefficient is a numerical characteristic of the dominance of class representatives in groups.
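
    One way to read the content authenticity coefficient described above is as the share of the dominant class inside each group. The sketch below clusters hypothetical latent-feature vectors with agglomerative grouping and computes that share; the data, the clustering settings and this reading of the coefficient are assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Hypothetical latent-feature vectors of documents and their known class labels.
latent = np.array([[0.90, 0.10], [0.80, 0.20], [0.20, 0.90],
                   [0.10, 0.80], [0.85, 0.15], [0.15, 0.85]])
classes = np.array([0, 0, 1, 1, 0, 1])

groups = AgglomerativeClustering(n_clusters=2).fit_predict(latent)

for g in np.unique(groups):
    members = classes[groups == g]
    dominance = np.bincount(members).max() / len(members)   # 1.0 means the group is pure
    print(f"group {g}: dominance of the majority class = {dominance:.2f}")
```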

    To divide documents into topics, it is proposed to use the union of groups in relation to their centers. As patterns for each topic, a sequence of words ordered by frequency of occurrence from a common dictionary is considered.

    The results of a computational experiment on collections of abstracts of scientific dissertations are presented. Sequences of words from the general dictionary on 4 topics are formed.

  8. Malkov S.Yu., Davydova O.I.
    Modernization as a global process: the experience of mathematical modeling
    Computer Research and Modeling, 2021, v. 13, no. 4, pp. 859-873

    The article analyzes empirical data on the long-term demographic and economic dynamics of the countries of the world for the period from the beginning of the 19th century to the present. Population and GDP of a number of countries for the period 1500–2016 were selected as indicators characterizing this long-term demographic and economic dynamics. Countries were chosen so as to include representatives with different levels of development (developed and developing countries), as well as countries from different regions of the world (North America, South America, Europe, Asia, Africa). A specially developed mathematical model was used for modeling and data processing. The model is an autonomous system of differential equations that describes the processes of socio-economic modernization, including the transition from an agrarian society to an industrial and post-industrial one. The model is based on the idea that the process of modernization begins with the emergence of an innovative sector in a traditional society, developing on the basis of new technologies. The population gradually moves from the traditional sector to the innovation sector. Modernization is completed when most of the population has moved to the innovation sector.

    Statistical methods of data processing and Big Data methods, including hierarchical clustering, were used. Using the developed algorithm based on the random descent method, the parameters of the model were identified and verified on the basis of empirical series, and the model was tested using statistical data reflecting the changes observed in developed and developing countries during the period of modernization taking place over the past centuries. Testing the model demonstrated its high quality: the deviations of the calculated curves from the statistical data are usually small and occur during periods of wars and economic crises. Thus, the analysis of statistical data on the long-term demographic and economic dynamics of the countries of the world made it possible to determine general patterns and formalize them in the form of a mathematical model. The model will be used to forecast demographic and economic dynamics in different countries of the world.
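
    The model itself is described here only qualitatively (an autonomous ODE system with population flowing from a traditional to an innovation sector), so the following is a deliberately toy two-compartment sketch integrated with scipy; the functional form, coefficients and initial shares are invented and do not reproduce the authors' equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

def modernization(t, y, r=0.02, k=0.05):
    """Toy autonomous system: x - traditional sector, z - innovation sector.
    Both grow at rate r, and population migrates from x to z at a rate
    proportional to the contact between the two sectors (illustrative form only)."""
    x, z = y
    flow = k * x * z / (x + z)
    return [r * x - flow, r * z + flow]

sol = solve_ivp(modernization, (0, 200), [0.99, 0.01], dense_output=True)
t = np.linspace(0, 200, 5)
print(np.round(sol.sol(t).T, 3))   # the innovation sector gradually takes over
```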

  9. Fedorov V.A., Khruschev S.S., Kovalenko I.B.
    Analysis of Brownian and molecular dynamics trajectories to reveal the mechanisms of protein-protein interactions
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 723-738

    The paper proposes a set of fairly simple analysis algorithms that can be used to analyze a wide range of protein-protein interactions. In this work, we jointly use the methods of Brownian and molecular dynamics to describe the process of formation of a complex of plastocyanin and cytochrome f proteins in higher plants. In the diffusion-collision complex, two clusters of structures were revealed; the transition between them is possible with preservation of the position of the center of mass of the molecules and is accompanied only by a rotation of plastocyanin by 134 degrees. The first and second clusters of collisional-complex structures differ in that in the first cluster only the “lower” plastocyanin region contacts the positively charged region near the small domain of cytochrome f, while in the second cluster both negatively charged regions do. The “upper” negatively charged region of plastocyanin in the first cluster is in contact with the amino acid residue lysine K122. When the final complex is formed, the plastocyanin molecule rotates by 69 degrees around an axis passing through both areas of electrostatic contact. With this rotation, water is displaced from the regions located near the cofactors of the molecules and formed by hydrophobic amino acid residues. This leads to the appearance of hydrophobic contacts, a decrease in the distance between the cofactors to less than 1.5 nm, and further stabilization of the complex in a position suitable for electron transfer. Characteristics such as contact matrices, rotation axes during the transition between states, and graphs of changes in the number of contacts during the modeling process make it possible to determine the key amino acid residues involved in the formation of the complex and to reveal the physicochemical mechanisms underlying this process.
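
    Two of the analysis tools named above, contact matrices and rotation angles between states, are generic enough to sketch. The code below builds a distance-based contact map and recovers the rotation angle between two conformations with the Kabsch algorithm; the coordinates, the 0.4 nm cutoff and the synthetic 90-degree rotation are assumptions, not data from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def contact_matrix(coords_a, coords_b, cutoff=0.4):
    """Boolean residue-residue contact map: True where the pair distance (nm) is below cutoff."""
    return cdist(coords_a, coords_b) < cutoff

def rotation_angle(p, q):
    """Angle (degrees) of the optimal rotation superposing q onto p (Kabsch algorithm)."""
    p0 = p - p.mean(axis=0)
    q0 = q - q.mean(axis=0)
    u, _, vt = np.linalg.svd(p0.T @ q0)
    d = np.sign(np.linalg.det(u @ vt))          # guard against improper rotations
    r = u @ np.diag([1.0, 1.0, d]) @ vt
    cos_theta = np.clip((np.trace(r) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

rng = np.random.default_rng(0)
conf1 = rng.normal(size=(50, 3))                                 # synthetic CA coordinates, nm
conf2 = conf1 @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T   # the same structure rotated 90 deg
print(round(rotation_angle(conf1, conf2)), "degrees")            # ~90
print(contact_matrix(conf1, conf2).sum(), "contacts below the cutoff")
```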

  10. Aksenov A.A., Kalugina M.D., Lobanov A.I., Kashirin V.S.
    Numerical simulation of fluid flow in a blood pump in the FlowVision software package
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 1025-1038

    A numerical simulation of fluid flow in a blood pump was performed using the FlowVision software package. This test problem, provided by the Center for Devices and Radiological Health of the U.S. Food and Drug Administration, involved considering fluid flow in several design modes, with a specific liquid flow rate and rotor speed set for each calculation case. The necessary data in the form of exact geometry, flow conditions and fluid characteristics were provided to all research participants, who used different software packages for modeling. Numerical simulations were performed in FlowVision for six calculation modes with a Newtonian fluid and the standard $k-\varepsilon$ turbulence model; in addition, the fifth mode was computed with the $k-\omega$ SST turbulence model and with the Carreau rheological fluid model. In the first stage of the numerical simulation, mesh convergence was investigated, on the basis of which a final mesh with a number of cells of the order of 6 million was chosen. Because of the large number of cells, part of the calculations was performed on the Lomonosov-2 cluster in order to accelerate the study. As a result of the numerical simulation, we obtained and analyzed the pressure difference between the inlet and outlet of the pump and the velocity between the rotor blades and in the diffuser area, and we also visualized the velocity distribution in certain cross-sections. For all design modes, the numerically obtained pressure difference was compared with the experimental data, and for the fifth calculation mode the velocity distributions between the rotor blades and in the diffuser area were also compared with the experiment. Data analysis has shown good agreement of the FlowVision results with the experimental results and with numerical simulations in other software packages. The results obtained in FlowVision for the US FDA test suggest that the FlowVision software package can be used for solving a wide range of hemodynamic problems.
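
    The fifth mode was additionally run with a non-Newtonian blood rheology; assuming the Carreau model named above is the standard Carreau viscosity law, a minimal sketch of that law with commonly cited literature parameters for blood (assumed values, not those of the article) looks as follows.

```python
def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.0035, lam=3.313, n=0.3568):
    """Carreau law: mu = mu_inf + (mu0 - mu_inf) * (1 + (lam * g)^2)^((n - 1) / 2).
    Parameter values are common literature values for blood (Pa*s, s); they are
    used here only for illustration."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

for g in [0.1, 1.0, 10.0, 100.0, 1000.0]:
    print(f"shear rate {g:7.1f} 1/s -> viscosity {carreau_viscosity(g):.5f} Pa*s")
```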
