-
Detection of the influence of the upper working roll's vibration on sheet thickness in cold rolling with the help of DEFORM-3D software
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 111-116
Current trends in technical diagnostics are connected with the application of FEM computer simulation, which allows, to some extent, replacing real experiments, reducing research costs and minimizing risks. Already at the research and development stage, computer simulation makes it possible to diagnose equipment and detect permissible fluctuations of its operating parameters. A peculiarity of diagnosing rolling equipment is that its functioning is directly tied to manufacturing a product of the required quality, including accuracy. The design of techniques for technical diagnosis and diagnostic modelling is therefore very important. Computer simulation of cold rolling of a strip was carried out, in which the upper working roll vibrated in the horizontal direction in accordance with published experimental data from the continuous 1700 rolling mill. The vibration of the working roll in the stand arose from the gap between the roll chock and the guide in the stand and led to periodic fluctuations of the strip thickness. The computer simulation in the DEFORM software produced a strip with longitudinal and transversal thickness variation. Visualization of the strip's geometrical parameters according to the simulation data corresponded to the type of inhomogeneity observed on the surface of strips rolled in practice. Further analysis of the thickness variation was carried out in order to identify, on the basis of the simulation, the sources of those periodic components of the strip thickness that are caused by equipment malfunctions. The advantage of computer simulation in searching for the sources of thickness variation is that different hypotheses concerning thickness formation may be tested without conducting real experiments, and various costs may be reduced.
Moreover, in simulation the initial strip thickness has no fluctuations, as opposed to industrial or laboratory experiments. On the basis of spectral analysis of the random process, it was established that the frequency of the change of strip thickness after rolling in one stand coincides with the frequency of the working roll's vibration. The results of the computer simulation correlate with the results of the research on the 1700 mill. Thus, the possibility of applying computer simulation to find the causes of strip thickness variation on an industrial rolling mill is shown.
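The spectral-analysis step described above can be sketched as follows: a discrete Fourier transform of a thickness record recovers its dominant periodic component, which in the paper coincides with the roll vibration frequency. The signal, sampling rate and frequencies below are invented for illustration; they are not the mill data.

```python
# Sketch: recovering the roll-vibration frequency from a strip-thickness
# signal with a plain discrete Fourier transform. Synthetic, illustrative data.
import cmath, math

def dft_peak_freq(signal, dt):
    """Return the frequency (Hz) of the strongest non-zero DFT component."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]  # drop the DC (nominal thickness) part
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        coeff = sum(centered[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k / (n * dt)

# Synthetic record: 2.00 mm nominal thickness, 5 um ripple at an assumed
# 10 Hz roll-vibration frequency, sampled at 100 Hz for 1 s.
dt = 0.01
thickness = [2.0 + 0.005 * math.sin(2 * math.pi * 10.0 * j * dt)
             for j in range(100)]
freq = dft_peak_freq(thickness, dt)   # recovers the 10 Hz component
```

In the paper the same comparison is made between the thickness spectrum and the known roll vibration frequency; here the match is built into the synthetic signal.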
-
High-throughput identification of hydride phase-change kinetics models
Computer Research and Modeling, 2020, v. 12, no. 1, pp. 171-183
Metal hydrides are an interesting class of chemical compounds that can reversibly bind a large amount of hydrogen and are therefore of interest for energy applications. Understanding the factors affecting the kinetics of hydride formation and decomposition is especially important. Features of the material, the experimental setup and the conditions affect the mathematical description of the processes, which can undergo significant changes during the processing of experimental data. The article proposes a general approach to the numerical modeling of the formation and decomposition of metal hydrides and to solving inverse problems of estimating material parameters from measurement data. The models are divided into two classes: diffusive models, which take into account the gradient of hydrogen concentration in the metal lattice, and models with fast diffusion. The former are more complex and take the form of non-classical boundary value problems of parabolic type; a rather general approach to the grid solution of such problems is described. The latter are solved relatively simply but can change greatly when the model assumptions change. Our experience in processing experimental data shows that a flexible software tool is needed: a tool that allows, on the one hand, building models from standard blocks and freely changing them if necessary, and, on the other hand, avoiding the implementation of routine algorithms. It should also be adapted to high-performance systems of different paradigms. These conditions are satisfied by the HIMICOS library presented in the paper, which has been tested on a large amount of experimental data. It allows simulating the kinetics of formation and decomposition of metal hydrides, as well as related tasks, at three levels of abstraction.
At the low level, the user defines the interface procedures, such as calculating a time layer based on the previous layer or on the entire history, calculating the observed value and the independent variable from the task variables, and comparing a curve with a reference one. At the middle level, special algorithms can be used for solving quite general parabolic boundary value problems with free boundaries and with various quasilinear (i.e., linear with respect to the derivative only) boundary conditions, as well as for calculating the distance between curves in different metric spaces and with different normalizations. At the high level, it is enough to choose a ready, tested model for a particular material and modify it with respect to the experimental conditions.
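The low-level interface idea — "calculate a time layer based on the previous layer" — can be illustrated with a minimal implicit step for a 1D parabolic problem. This is a generic sketch of such a procedure, not the HIMICOS API; all names and parameters here are invented.

```python
# Sketch of a "time layer from previous layer" procedure: one backward-Euler
# step for u_t = D*u_xx with Dirichlet boundaries, solved with the Thomas
# (tridiagonal) algorithm. Generic illustration, not the HIMICOS interface.
def diffusion_step(u, D, dx, dt, left, right):
    n = len(u)
    r = D * dt / dx**2
    # Interior rows: -r*u[i-1] + (1+2r)*u[i] - r*u[i+1] = u_prev[i]
    a = [-r] * n; b = [1 + 2 * r] * n; c = [-r] * n; d = list(u)
    b[0] = b[-1] = 1.0; c[0] = a[-1] = 0.0
    d[0], d[-1] = left, right          # Dirichlet boundary values
    # Forward elimination
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # Back substitution
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# March an initial concentration spike; with zero boundaries it decays away.
u = [0.0] * 21
u[10] = 1.0
for _ in range(200):
    u = diffusion_step(u, D=1.0, dx=0.05, dt=0.01, left=0.0, right=0.0)
```

In the library described in the paper, such a step routine would be one of the user-supplied blocks; free boundaries and quasilinear boundary conditions would replace the fixed Dirichlet rows above.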
-
The applicability of the approximation of single scattering in pulsed sensing of an inhomogeneous medium
Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1063-1079
A mathematical model based on the linear integro-differential Boltzmann equation is considered in this article. The model describes radiation transfer in a scattering medium irradiated by a point source. An inverse problem for the transfer equation is posed: determining the scattering coefficient from the time-angular distribution of the radiation flux density at a given point in space. The Neumann series representation of the solution of the radiation transfer equation is analyzed in the study of the inverse problem. The zeroth term of the series describes unscattered radiation, the first term describes the single-scattered field, and the remaining terms describe the multiple-scattered field. The single scattering approximation is widely used to calculate an approximate solution of the radiation transfer equation for regions with a small optical thickness and a low level of scattering. Using this approximation, an analytical formula is obtained for finding the scattering coefficient in a problem with additional restrictions on the initial data. To verify the adequacy of the obtained formula, a weighted Monte Carlo method for solving the transfer equation was constructed and implemented in software, taking into account multiple scattering in the medium and the space-time singularity of the radiation source. Computational experiments were carried out as applied to problems of high-frequency acoustic sensing in the ocean. The use of the single scattering approximation is justified, at least, at a sensing range of about one hundred meters; the double- and triple-scattered fields make the main contribution to the error of the formula.
For larger regions, the single scattering approximation gives at best only a qualitative evaluation of the medium structure; sometimes it does not even allow determining the order of magnitude of the quantitative parameters characterizing the interaction of radiation with matter.
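The role of the scattering orders can be illustrated with a toy Monte Carlo estimate: at small optical thickness nearly all of the scattered signal is single-scattered, and the share of higher orders grows with the optical thickness. This is a deliberately simplified homogeneous model, not the weighted ocean-sensing code of the paper; all parameters are illustrative.

```python
# Sketch: Monte Carlo estimate of the fraction of scattering events of each
# order among photon histories inside a region of optical thickness tau.
# Toy homogeneous model with a fixed single-scattering albedo.
import math, random

def scattering_order_fractions(tau, albedo=0.9, n_photons=20000, seed=1):
    """Fractions of detected histories with scattering order 1, 2 and >=3."""
    rng = random.Random(seed)
    counts = {1: 0, 2: 0, 3: 0}        # key 3 stands for "three or more"
    total = 0
    for _ in range(n_photons):
        depth, order = 0.0, 0
        while True:
            depth += -math.log(rng.random())   # exponential free path
            if depth > tau:                    # photon leaves the region
                break
            if rng.random() > albedo:          # photon absorbed: discard
                order = 0
                break
            order += 1
        if order > 0:
            counts[min(order, 3)] += 1
            total += 1
    return {k: v / total for k, v in counts.items()}

small = scattering_order_fractions(tau=0.1)   # single scattering dominates
large = scattering_order_fractions(tau=2.0)   # multiple scattering grows
```

The paper's conclusion has the same structure: the formula derived in the single scattering approximation is reliable only while the higher-order terms of the Neumann series remain small.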
-
Methodology of aircraft icing calculation in a wide range of climate and speed parameters. Applicability within the NLG-25 airworthiness standards
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 957-978
Certification of transport airplanes for flights under icing conditions in Russia was carried out within the framework of the requirements of Annex C to the AP-25 Aviation Rules. The new Russian certification document, the Airworthiness Standards (NLG-25), in force since 2023 as a replacement for AP-25, proposes the introduction of Appendix O. A feature of Appendix O is the need to carry out calculations under conditions of high liquid water content and with large water drops (500 microns or more). With such parameters of the dispersed flow, physical processes such as the stripping and splashing of the water film when large drops enter it become decisive, and the flow of the dispersed medium under such conditions is essentially polydisperse. This paper describes the modifications of the IceVision technique, implemented on the basis of the FlowVision software package, for ice accretion calculations within the framework of Appendix O.
The main difference between the IceVision technique and the known approaches is the use of the Volume of Fluid (VOF) technology for tracking the changes of the ice shape. The external flow around the aircraft is calculated simultaneously with the growth of the ice and its heating. The ice is explicitly incorporated in the computational domain, and the heat transfer equation is solved inside it. Unlike in Lagrangian approaches, the Eulerian computational grid is not completely rebuilt in the IceVision technique: only the cells containing the contact surface are changed.
The IceVision 2.0 version accounts for stripping of the film, as well as bouncing and splashing of falling drops on the surfaces of the aircraft and the ice. The diameter of secondary droplets is calculated using known empirical correlations. The speed of the water film flow over the surface is determined taking into account the action of aerodynamic forces, gravity, the hydrostatic pressure gradient and the surface tension force. Taking surface tension into account produces the effect of contraction of the film, which leads to the formation of water flows in the form of rivulets and of ice deposits in the form of comb-like growths. An energy balance relation is fulfilled on the ice surface that takes into account the energy of falling drops, heat exchange between ice and air, the heat of crystallization, evaporation, sublimation and condensation. The paper presents the results of solving benchmark and model problems, demonstrating the effectiveness of the IceVision technique and the reliability of the obtained results.
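The kind of surface energy balance mentioned above can be sketched with the classical Messinger-type freezing-fraction estimate. This is a textbook simplification that keeps only convective cooling and the warming of supercooled droplets; it is not the IceVision formulation, and all the numbers below are invented.

```python
# Sketch: simplified Messinger-type balance on an icing surface held at the
# freezing point. Latent heat released by the freezing fraction f of the
# impinging water balances convective loss plus droplet warming; evaporation,
# sublimation and kinetic heating are neglected in this toy version.
def freezing_fraction(m_imp, h_c, T_air, L_f=3.34e5, c_w=4218.0, T_frz=273.15):
    """
    m_imp  impinging water mass flux, kg/(m^2*s)
    h_c    convective heat transfer coefficient, W/(m^2*K)
    T_air  ambient air temperature, K
    Returns the freezing fraction clipped to [0, 1].
    """
    q_sink = (h_c + m_imp * c_w) * (T_frz - T_air)  # convection + droplet warming
    f = q_sink / (m_imp * L_f)                      # share frozen on impact
    return max(0.0, min(1.0, f))

# Illustrative cases: a cold (rime-like) and a near-freezing (glaze-like) flight.
cold = freezing_fraction(m_imp=0.005, h_c=100.0, T_air=253.15)   # -20 C
warm = freezing_fraction(m_imp=0.005, h_c=100.0, T_air=271.15)   # -2 C
```

A freezing fraction below one is what produces the running water film whose rivulets and comb-like deposits the paper models.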
-
Running applications on a hybrid cluster
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 475-483
A hybrid cluster implies the use of computational devices with radically different architectures: usually a conventional CPU architecture (e.g. x86_64) and a GPU architecture (e.g. NVIDIA CUDA). Creating and exploiting such a cluster requires some experience: in order to harness all the computational power of the described system and get substantial speedup for computational tasks, many factors should be taken into account. These factors include hardware characteristics (e.g. network infrastructure, type of data storage, GPU architecture) as well as the software stack (e.g. MPI implementation, GPGPU libraries). So, in order to run scientific applications, GPU capabilities, software features, task size and other factors should be considered.
This report discusses the opportunities and problems of hybrid computations. Some statistics from runs of test programs and applications are demonstrated. The main focus of interest is open-source applications (e.g. OpenFOAM) that support GPGPU (with some parts rewritten to use GPGPU directly or by replacing libraries).
There are several approaches to organizing heterogeneous computations for different GPU architectures, of which the CUDA library and the OpenCL framework are compared. The CUDA library is becoming quite typical for hybrid systems with NVIDIA cards, but OpenCL offers portability, which can be a determining factor when choosing a framework for development. We also put emphasis on multi-GPU systems, which are often used to build hybrid clusters. Calculations were performed on a hybrid cluster of the SPbU computing center.
-
A biomathematical system for the description of nucleic acids
Computer Research and Modeling, 2020, v. 12, no. 2, pp. 417-434
The article is devoted to the application of various methods of mathematical analysis, the search for patterns, and the study of the nucleotide composition of DNA sequences at the genomic level. New methods of mathematical biology that made it possible to detect and visualize the hidden ordering of genetic nucleotide sequences located in the chromosomes of cells of living organisms are described. The research is based on the work on algebraic biology of S. V. Petukhov, doctor of physical and mathematical sciences, who first introduced and justified new algebras and hypercomplex numerical systems describing genetic phenomena. This paper describes a new phase in the development of matrix methods in genetics for studying the properties of nucleotide sequences (and their physicochemical parameters), built on the principles of finite geometry. The aim of the study is to demonstrate the capabilities of the new algorithms and to discuss the discovered properties of genetic DNA and RNA molecules. The study includes three stages: parameterization, scaling, and visualization. Parameterization is the determination of the parameters taken into account, which are based on the structural and physicochemical properties of nucleotides as elementary components of the genome. Scaling plays the role of "focusing" and allows exploring genetic structures at various scales. Visualization includes the selection of the axes of the coordinate system and of the method of visual display. The algorithms presented in this work are put forward as a new toolkit for the development of research software for the analysis of long nucleotide sequences, with the ability to display genomes in parametric spaces of various dimensions. One of the significant results of the study is that new criteria were obtained for classifying the genomes of various living organisms and identifying interspecific relationships.
The new concept allows visually and numerically assessing the variability of the physicochemical parameters of nucleotide sequences. It also makes it possible to relate the parameters of DNA and RNA molecules to fractal geometric mosaics, and it reveals the ordering and symmetry of polynucleotides, as well as their noise immunity. The results obtained justified the introduction of new terms: "genometry" as a methodology of computational strategies and "genometrica" as the specific parameters of a particular genome or nucleotide sequence. In connection with the results obtained, questions of biosemiotics and of the hierarchical levels of organization of living matter are raised.
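The parameterization-and-visualization idea can be illustrated with the classical "DNA walk", in which each nucleotide moves a point in a parametric plane according to a structural property (purine/pyrimidine) and a physicochemical one (strong/weak hydrogen bonding). This is a generic textbook encoding, not the genometry algorithms of the article.

```python
# Sketch: a 2D "DNA walk" parameterization of a nucleotide sequence.
# x steps by purine (A, G) vs pyrimidine (C, T); y steps by strong (G, C)
# vs weak (A, T) hydrogen bonding. Generic illustration only.
def dna_walk(seq):
    """Return the list of (x, y) points of the cumulative walk."""
    x = y = 0
    path = [(0, 0)]
    for base in seq.upper():
        x += 1 if base in "AG" else -1   # purine vs pyrimidine axis
        y += 1 if base in "GC" else -1   # strong vs weak bonding axis
        path.append((x, y))
    return path

path = dna_walk("ATGCGCGATATA")   # a short illustrative sequence
```

Plotting such walks at different window sizes corresponds to the "scaling" stage described in the abstract: the same sequence can be examined at different levels of detail.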
-
Development of acoustic-vortex decomposition method for car tyre noise modelling
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 979-993
Road noise is one of the key issues in maintaining high environmental standards. At speeds between 50 and 120 km/h, tires are the main source of noise generated by a moving vehicle. It is well known that either the interaction between the tire tread and the road surface or certain internal dynamic effects are responsible for tire noise and vibration. This paper discusses the application of a new method for modelling the generation and propagation of sound during tire motion, based on the so-called acoustic-vortex decomposition. Currently, the main approaches used to model tire noise apply the Lighthill equation and the aeroacoustic analogy. The aeroacoustic analogy, in addressing the problem of separating the acoustic and vortex (pseudo-sound) modes of oscillation, is not a mathematically rigorous formulation for deriving the source (right-hand side) of the acoustic wave equation. In the acoustic-vortex decomposition method, a mathematically rigorous transformation of the equations of motion of a compressible medium is performed to obtain an inhomogeneous wave equation with respect to static enthalpy pulsations, with a source term that depends on the velocity field of the vortex mode. In this case, the near-field pressure fluctuations are the sum of acoustic fluctuations and pseudo-sound. Thus, the acoustic-vortex decomposition method makes it possible to model adequately both the acoustic field and the dynamic loads that generate tire vibration, providing a complete solution to the problem of modelling tire noise: the noise resulting from the turbulent flow around the tire with the generation of vortex sound, as well as the dynamic loads and the noise emission due to tire vibration. The method is first implemented and tested in the FlowVision software package.
The results obtained with FlowVision are compared with those obtained with the LMS Virtual.Lab Acoustics package and a number of differences in the acoustic field are highlighted.
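A standard textbook form of such a decomposition is Howe's vortex-sound equation, written for the stagnation enthalpy $B$; the exact enthalpy-based formulation implemented in FlowVision may differ, so this is given only as the conventional reference form:

\[
\left( \frac{D}{Dt}\,\frac{1}{c^{2}}\,\frac{D}{Dt} \;-\; \nabla^{2} \right) B \;=\; \nabla \cdot \left( \boldsymbol{\omega} \times \mathbf{v} \right),
\]

where $\mathbf{v}$ is the velocity, $\boldsymbol{\omega} = \nabla \times \mathbf{v}$ the vorticity, and $c$ the speed of sound. The right-hand side vanishes wherever the flow is irrotational, so the vortex mode acts as the source of the acoustic mode, while the near-field pressure remains the sum of the acoustic and pseudo-sound parts.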
-
A method for handling legacy information systems
Computer Research and Modeling, 2014, v. 6, no. 2, pp. 331-344
In this article a method for handling legacy information systems is offered. In their professional activities, specialists in various domains of industry face the problem that the computer software involved at the product development stage becomes obsolete much more quickly than the product itself. At the same time, switching to modern software might not be possible for various reasons. This problem is known as the "legacy system" problem. It appears when the product lifecycle is significantly longer than that of the software systems used for the product's creation. In this article the author offers an approach to solving this problem, along with a computer application based on this approach.
-
Performance of OpenMP and MPI implementations on an UltraSPARC system
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 485-491
This paper targets programmers and developers interested in utilizing parallel programming techniques to enhance application performance. The Oracle Solaris Studio software provides state-of-the-art optimizing and parallelizing compilers for C, C++ and Fortran, an advanced debugger, and optimized mathematical and performance libraries. Also included are an extremely powerful performance analysis tool for profiling serial and parallel applications, a thread analysis tool to detect data races and deadlocks in shared-memory parallel programs, and an Integrated Development Environment (IDE). The Oracle Message Passing Toolkit software provides the high-performance MPI libraries and the associated run-time environment needed for message passing applications that can run on a single system or across multiple compute systems connected with high-performance networking, including Gigabit Ethernet, 10 Gigabit Ethernet, InfiniBand and Myrinet. Examples of OpenMP and MPI are provided throughout the paper, including their usage via the Oracle Solaris Studio and Oracle Message Passing Toolkit products for the development and deployment of both serial and parallel applications on SPARC and x86/x64 based systems. Throughout the paper it is demonstrated how to develop and deploy an application parallelized with OpenMP and/or MPI.
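The work-sharing pattern that both OpenMP and MPI express can be sketched in pure Python. This sketch reproduces only the block-partitioning and reduction structure of such a program; it does not use the Oracle tools or any real MPI calls, and all names are invented.

```python
# Sketch: the OpenMP/MPI work-sharing idea. Each "rank" gets a contiguous
# block of the index range and computes a partial sum; a final reduction
# combines the partial results, as MPI_Reduce or an OpenMP reduction would.
from concurrent.futures import ThreadPoolExecutor

def block_range(rank, nranks, n):
    """Contiguous [lo, hi) block of 0..n for the given rank (MPI-style)."""
    base, extra = divmod(n, nranks)
    lo = rank * base + min(rank, extra)
    return lo, lo + base + (1 if rank < extra else 0)

def partial_sum(rank, nranks, data):
    lo, hi = block_range(rank, nranks, len(data))
    return sum(data[lo:hi])

data = list(range(1, 1001))
nranks = 4
with ThreadPoolExecutor(max_workers=nranks) as pool:
    partials = pool.map(partial_sum, range(nranks), [nranks] * nranks,
                        [data] * nranks)
total = sum(partials)   # the "reduction" step
```

In a real MPI program each rank would hold only its own block and the reduction would be a collective call; in OpenMP the partitioning would be done by the work-sharing `for` construct with a `reduction` clause.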
-
Simulation of forming of a UFG Ti-6-4 alloy under low-temperature superplasticity
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 127-133
Superplastic forming of Ni- and Ti-based alloys is widely used in the aerospace industry. The main advantage of using the effect of superplasticity in sheet metal forming is the feasibility of forming materials to a high amount of plastic strain under conditions of prevailing tensile stresses. This article studies the application of the commercial FEM software SFTC DEFORM for predicting thickness deviation during low-temperature superplastic forming of a UFG Ti-6-4 alloy. Experimentally, thickness deviation during superplastic forming is observed in the local area of plastic deformation; the process is aggravated by local softening of the metal caused by microstructure coarsening. A theoretical model was prepared to analyze the experimentally observed metal flow, using two approaches. The first is the use of the integrated creep rheology model in DEFORM. Since the superplastic effect is observed only in materials with fine and ultrafine grain sizes, the second approach uses custom rheology-model procedures based on microstructure evolution equations; these were implemented in DEFORM via user Fortran solver subroutines. Using FEM simulation for this type of forming allows tracking the strain rate in different parts of the workpiece during the process, which is crucial for maintaining superplastic conditions. Comparison of the two approaches allows conclusions about the effect of microstructure evolution on metal flow during superplastic deformation. The results of the FEM analysis and the theoretical conclusions were confirmed by the results of the conducted Erichsen test.
The main conclusions of this study are as follows: a) the DEFORM software allows an engineer to predict the formation of the metal shape under conditions of low-temperature superplasticity; b) in order to improve the accuracy of the prediction of local deformations, the effect of the microstructural state of an alloy with a submicrocrystalline structure should be taken into account in the course of calculations in the DEFORM software.
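The link between microstructure and flow localization can be illustrated with the standard superplastic power law sigma = K * (strain rate)^m, where grain coarsening raises the coefficient K and hence the flow stress. The coefficients below are illustrative assumptions, not the fitted DEFORM model of the paper.

```python
# Sketch: power-law superplastic flow stress with a grain-size-dependent
# coefficient K(d) = K0 * d^p. A coarsened microstructure hardens locally,
# shifting deformation to thinner (softer) regions of the sheet.
# All coefficients are illustrative, not fitted material data.
def flow_stress(strain_rate, grain_size_um, m=0.45, K0=50.0, p=0.7):
    """Flow stress (arbitrary units) for strain rate in 1/s and grain size in um."""
    K = K0 * grain_size_um ** p        # hypothetical grain-size dependence
    return K * strain_rate ** m        # strain-rate sensitivity exponent m

# UFG material at a superplastic strain rate vs a coarsened state.
ufg = flow_stress(1e-3, grain_size_um=0.5)
coarse = flow_stress(1e-3, grain_size_um=5.0)
```

A high strain-rate sensitivity m is what resists necking in superplastic forming, which is why tracking the local strain rate in the FEM model matters.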
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index