All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Application of the Dynamic Mode Decomposition in search of unstable modes in laminar-turbulent transition problem
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 1069-1090
Laminar-turbulent transition is the subject of active research aimed at improving the economic efficiency of air vehicles, because drag increases in a turbulent boundary layer, which leads to higher fuel consumption. One direction of this research is the search for efficient methods of locating the transition in space. Using information about the laminar-turbulent transition location when designing an aircraft, engineers can predict its performance and profitability at the early stages of a project. Traditionally, the e^N method, a well-known approach in industry, is applied to find the coordinates of the laminar-turbulent transition. Despite its widespread use, this method has significant drawbacks: it relies on the parallel-flow assumption, which limits the scenarios in which it can be applied, and it requires computationally expensive calculations over a wide range of frequencies and wave numbers. Alternatively, the flow can be analyzed with Dynamic Mode Decomposition, which extracts flow disturbances directly from flow data. Since Dynamic Mode Decomposition is a dimensionality-reduction method, the number of computations can be reduced dramatically. Furthermore, Dynamic Mode Decomposition broadens the applicability of the overall method, because its derivation makes no parallel-flow assumption.
The presented study proposes an approach to finding the location of the laminar-turbulent transition using Dynamic Mode Decomposition. The essence of the approach is to divide the boundary-layer region into sets of subregions; for each subregion the transition point is calculated independently, using Dynamic Mode Decomposition for flow analysis, and the results are then averaged to produce the final estimate. The approach is validated by predicting the laminar-turbulent transition in subsonic and supersonic flows over a 2D flat plate with zero pressure gradient. The results demonstrate the fundamental applicability and high accuracy of the described method over a wide range of conditions. The study focuses on a comparison with the e^N method and demonstrates the advantages of the proposed approach: using Dynamic Mode Decomposition leads to significantly faster execution thanks to less intensive computations, while the accuracy is comparable to that of the e^N solution. This indicates the potential of the described approach in real-world applications.
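For readers unfamiliar with the technique, the following is a minimal sketch of exact Dynamic Mode Decomposition applied to a matrix of flow snapshots. It illustrates the standard algorithm only, not the authors' subregion-averaging pipeline, and all names and parameters are illustrative.

```python
import numpy as np

def dmd(snapshots, r, dt):
    """Exact DMD of a snapshot matrix (states in columns, uniform step dt).

    snapshots : (n_states, n_times) array of flow-field samples
    r         : truncation rank for the SVD
    Returns DMD modes, continuous-time eigenvalues, and mode amplitudes.
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]       # time-shifted pairs
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r, :].conj().T    # rank-r truncation
    # Low-rank approximation of the linear operator A with Y ~ A X
    A_tilde = U.conj().T @ Y @ V / s
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ V / s @ W                            # exact DMD modes
    omega = np.log(eigvals.astype(complex)) / dt     # growth rate + frequency
    amps = np.linalg.lstsq(modes, snapshots[:, 0], rcond=None)[0]
    return modes, omega, amps

# Modes with Re(omega) > 0 grow in time; in a transition study these
# unstable modes are the ones that signal amplifying disturbances.
```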
-
High Performance Computing for Blood Modeling
Computer Research and Modeling, 2012, v. 4, no. 4, pp. 917-941
Methods for modeling blood flow and its rheological properties are reviewed, with blood treated as a particle suspension. The methods covered are the boundary integral equation method (BIEM), the lattice Boltzmann method (LBM), finite elements on a dynamic mesh, dissipative particle dynamics (DPD), and agent-based modeling. An analysis of these methods' applications on high-performance systems with various architectures is presented.
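As a taste of one of the reviewed particle methods, here is a minimal sketch of the pairwise forces in dissipative particle dynamics, following the standard Groot-Warren formulation; the parameter values are illustrative and not tied to any specific blood model from the review.

```python
import numpy as np

def dpd_pair_force(r_i, r_j, v_i, v_j, a=25.0, gamma=4.5, kT=1.0,
                   r_c=1.0, dt=0.01, rng=np.random.default_rng()):
    """DPD force on particle i due to particle j: conservative +
    dissipative + random terms with the standard soft weight function."""
    r_ij = r_i - r_j
    r = np.linalg.norm(r_ij)
    if r >= r_c:                       # interactions vanish beyond the cutoff
        return np.zeros_like(r_ij)
    e = r_ij / r                       # unit vector from j to i
    w = 1.0 - r / r_c                  # soft weight function
    sigma = np.sqrt(2.0 * gamma * kT)  # fluctuation-dissipation relation
    f_c = a * w * e                                 # soft repulsion
    f_d = -gamma * w**2 * np.dot(e, v_i - v_j) * e  # friction on relative motion
    f_r = sigma * w * rng.standard_normal() / np.sqrt(dt) * e  # thermal noise
    return f_c + f_d + f_r
```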
-
Distributed dCache-based storage system of UB RAS
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 559-563
An approach to building a territorially distributed storage system for the high-performance computing environment of UB RAS is presented. The storage system is based on the dCache middleware from the European Middleware Initiative project. The first milestone of the implementation covers data centers in two UB RAS regions: Yekaterinburg and Perm.
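dCache exposes stored data through standard protocol "doors", one of which is WebDAV/HTTP; the sketch below shows how a client might read a file over that interface. The endpoint, port, path, and token-based authentication are all assumptions for illustration, since real deployments differ in namespace layout and credentials.

```python
import requests

# Hypothetical WebDAV door of a dCache instance (2880 is a common choice).
BASE = "https://dcache.example.org:2880"

def fetch(path, out_file, token):
    """Stream a file from a dCache WebDAV door over HTTPS."""
    resp = requests.get(f"{BASE}{path}",
                        headers={"Authorization": f"Bearer {token}"},
                        stream=True, timeout=60)
    resp.raise_for_status()
    with open(out_file, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)

# fetch("/data/experiment/run42.dat", "run42.dat", token="...")
```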
-
A CPU benchmarking characterization of ARM based processors
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 581-586
Big science projects are producing data at ever-increasing rates. Typical techniques involve storing the data to disk after minor filtering and then processing it in large computer farms. Data production has reached a point where on-line processing is required in order to filter the data down to manageable sizes. A potential solution involves using low-cost, low-power ARM processors in large arrays to provide massive parallelisation for data stream computing (DSC). The main advantage of using Systems on Chip (SoCs) is inherent in their design philosophy: SoCs are primarily used in mobile devices and hence consume less power while maintaining relatively good performance. A benchmarking characterisation of three different models of ARM processors is presented.
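The abstract does not name the benchmark suite used; as a generic illustration of the idea, the sketch below times a fixed floating-point workload and reports throughput, the kind of per-core figure on which such characterisations rest. All names and sizes are illustrative.

```python
import time
import numpy as np

def flops_benchmark(n=2_000_000, reps=5):
    """Time a fixed floating-point kernel and report millions of
    floating-point operations per second (a crude CPU benchmark)."""
    x = np.random.default_rng(0).random(n)
    best = float("inf")
    for _ in range(reps):                    # keep the best of several runs
        t0 = time.perf_counter()
        y = 2.5 * x * x + 1.5 * x + 0.5      # ~5n floating-point ops
        best = min(best, time.perf_counter() - t0)
    return 5 * n / best / 1e6, y.sum()       # MFLOP/s and a checksum

mflops, _ = flops_benchmark()
print(f"~{mflops:.0f} MFLOP/s on this core (illustrative figure only)")
```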
-
Memory benchmarking characterisation of ARM-based SoCs
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 607-613
Computational intensity is traditionally the focus of large-scale computing system designs, generally leaving such designs ill-equipped to handle throughput-oriented workloads efficiently. In addition, the cost and energy consumption of large-scale computing systems remain a source of concern. A potential solution involves using low-cost, low-power ARM processors in large arrays in a manner that provides massive parallelisation and high rates of data throughput (relative to existing large-scale computing designs). Giving greater priority to both throughput rate and cost increases the relevance of primary memory performance and design optimisations to overall system performance. Using several primary memory benchmarks to evaluate various aspects of RAM and cache performance, we characterise the performance of four different models of ARM-based system-on-chip, namely the Cortex-A9, Cortex-A7, Cortex-A15 r3p2 and Cortex-A15 r3p3. We then discuss the relevance of these results to high-volume computing and the potential for ARM processors.
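The specific benchmarks are not listed in the abstract; as an illustration of the category, the following sketch estimates sustained memory bandwidth with a STREAM-triad-style kernel over arrays far larger than the last-level cache. Array sizes and names are assumptions for illustration.

```python
import time
import numpy as np

def triad_bandwidth(n=20_000_000, reps=5, scalar=3.0):
    """STREAM-triad-style kernel a = b + scalar * c, sized to overflow
    the caches so the result approximates main-memory bandwidth."""
    b, c, a = np.ones(n), np.ones(n), np.empty(n)
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        np.multiply(c, scalar, out=a)   # a = scalar*c (read c, write a)
        a += b                          # a = b + scalar*c (read a, b; write a)
        best = min(best, time.perf_counter() - t0)
    bytes_moved = 5 * n * a.itemsize    # 3 reads + 2 writes per element
    return bytes_moved / best / 1e9     # GB/s

print(f"~{triad_bandwidth():.1f} GB/s sustained (illustrative figure)")
```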
-
The Tier-1 resource center at the National Research Centre “Kurchatov Institute” for the ALICE, ATLAS and LHCb experiments at the Large Hadron Collider (LHC)
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 621-630
A review of the distributed computing infrastructure of the Tier-1 sites for the ALICE, ATLAS and LHCb experiments at the LHC is given. Special emphasis is placed on the main tasks and services of the Tier-1 site operated at the Kurchatov Institute in Moscow.
-
Nowadays cloud computing is an important topic in the field of information technology and computer systems. Several companies and educational institutions have deployed cloud infrastructures to overcome the limitations of traditional network infrastructures and to gain benefits such as easy data access, software updates at minimal cost, large or unlimited storage, cost efficiency, and backup storage with disaster recovery. The paper presents a study of cloud computing technology for marine environmental data and processing. Cloud computing of marine environment information is proposed for the integration and sharing of marine information resources. It is highly desirable to perform empirical work requiring numerous interactions with web servers and transfers of very large archival data files without affecting the operational information system infrastructure. In this paper, we consider cloud computing for a virtual testbed as a way to minimize cost compared with a real-time infrastructure.