-
Computational algorithm for solving the nonlinear boundary-value problem of hydrogen permeability with dynamic boundary conditions and concentration-dependent diffusion coefficient
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1179-1193

The article deals with the nonlinear boundary-value problem of hydrogen permeability corresponding to the following experiment. A membrane made of the target structural material, heated to a sufficiently high temperature, serves as the partition in a vacuum chamber. Degassing is performed in advance. A constant pressure of gaseous (molecular) hydrogen is built up at the inlet side. The penetrating flux is determined by mass spectrometry in the vacuum maintained at the outlet side.
A linear dependence on concentration is adopted for the diffusion coefficient of dissolved atomic hydrogen in the bulk; the temperature dependence conforms to the Arrhenius law. The surface processes of dissolution and sorption-desorption are taken into account in the form of nonlinear dynamic boundary conditions (differential equations for the dynamics of the surface concentrations of atomic hydrogen). The characteristic mathematical feature of the boundary-value problem is that concentration time derivatives are included both in the diffusion equation and in the boundary conditions with quadratic nonlinearity. In terms of the general theory of functional differential equations, this leads to so-called neutral-type equations and requires a more complex mathematical apparatus. An iterative computational algorithm of second- (or higher-) order accuracy based on explicit-implicit difference schemes is suggested for solving the corresponding nonlinear boundary-value problem. To avoid solving a nonlinear system of equations at every time step, we apply the explicit component of the difference scheme to the slower sub-processes.
The results of numerical modeling are presented, confirming that the model fits the experimental data. The degrees of impact of variations in the hydrogen permeability parameters (“derivatives”) on the penetrating flux and on the concentration distribution of H atoms through the sample thickness are determined. This knowledge is important, in particular, when designing protective structures against hydrogen embrittlement or membrane technologies for producing high-purity hydrogen. The computational algorithm makes it possible to use the model to analyze extreme regimes for structural materials (pressure drops, high temperatures, unsteady heating), to identify the limiting factors under specific operating conditions, and to save on costly experiments (especially in deuterium-tritium investigations).
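The explicit-implicit idea described above can be illustrated with a minimal sketch, not the authors' second-order algorithm: for c_t = (D(c) c_x)_x with the linear law D(c) = D0(1 + a c), the concentration-dependent coefficient is lagged to the previous time level, so each step reduces to a linear tridiagonal solve. The function name, the fixed Dirichlet boundary values (standing in for the paper's dynamic surface equations) and all parameter values are illustrative assumptions.

```python
import numpy as np

def step_diffusion(c, dt, dx, D0=1.0, a=0.5, c_left=1.0, c_right=0.0):
    """One time step of c_t = (D(c) c_x)_x with D(c) = D0*(1 + a*c).

    Explicit-implicit splitting: D(c) is evaluated at the previous time
    level (explicit part), so the step requires only a *linear* solve.
    """
    n = c.size
    D = D0 * (1.0 + a * c)              # lagged diffusivity at the nodes
    Dm = 0.5 * (D[:-1] + D[1:])         # interface values D_{i+1/2}
    r = dt / dx**2
    # Assemble the tridiagonal system A c_new = rhs (implicit part)
    A = np.zeros((n, n))
    A[0, 0] = A[-1, -1] = 1.0           # Dirichlet rows
    for i in range(1, n - 1):
        A[i, i - 1] = -r * Dm[i - 1]
        A[i, i] = 1.0 + r * (Dm[i - 1] + Dm[i])
        A[i, i + 1] = -r * Dm[i]
    rhs = c.copy()
    rhs[0], rhs[-1] = c_left, c_right   # fixed boundary concentrations
    return np.linalg.solve(A, rhs)

# March to an approximate steady state on a unit-thickness membrane
x = np.linspace(0.0, 1.0, 51)
c = np.zeros_like(x)
for _ in range(2000):
    c = step_diffusion(c, dt=1e-3, dx=x[1] - x[0])
```

Because D grows with c, the steady profile is flatter near the high-concentration inlet side than a linear profile would be.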
-
The analysis of various design methods for production of housing parts by combined extrusion
Computer Research and Modeling, 2014, v. 6, no. 6, pp. 967-974

The article reviews various methods for estimating the combined extrusion process for a representative part, together with analytical calculations and numerical simulation of this process in the DEFORM 3D program. A comparative analysis of the results obtained by the different methods was made, and assumptions about the main factors having a significant effect on the reliability of the results were formulated.
-
Numerical investigations of mixing non-isothermal streams of sodium coolant in T-branch
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 95-110

A numerical investigation of the mixing of non-isothermal streams of sodium coolant in a T-branch is carried out in the FlowVision CFD software. This study is aimed at substantiating the applicability of different approaches to predicting the oscillating behavior of the flow in the mixing zone and simulating temperature pulsations. The following approaches are considered: URANS (Unsteady Reynolds-Averaged Navier-Stokes), LES (Large Eddy Simulation) and quasi-DNS (Direct Numerical Simulation). One of the main tasks of the work is identifying the advantages and drawbacks of these approaches.
The numerical investigation of the temperature pulsations arising in the liquid and the T-branch walls from the mixing of non-isothermal streams of sodium coolant was carried out within a mathematical model assuming that the flow is turbulent, the fluid density does not depend on pressure, and heat exchange proceeds between the coolant and the T-branch walls. The LMS model, designed for modeling turbulent heat transfer, was used in the calculations within the URANS approach; it allows calculating the distribution of the Prandtl number over the computational domain.
A preliminary study was dedicated to estimating the influence of the computational grid on the development of the oscillating flow and the character of the temperature pulsations within the aforementioned approaches. The study resulted in the formulation of grid-generation criteria for each approach.
Then, calculations of three flow regimes were carried out. The regimes differ in the ratios of the sodium mass flow rates and the temperatures at the T-branch inlets. Each regime was calculated using the URANS, LES and quasi-DNS approaches.
At the final stage of the work, an analytical comparison of the numerical and experimental data was performed. The advantages and drawbacks of each approach to simulating the mixing of non-isothermal streams of sodium coolant in the T-branch are revealed and formulated.
It is shown that the URANS approach predicts the mean temperature distribution with reasonable accuracy and requires essentially fewer computational and time resources than the LES and DNS approaches. Its drawback is that it does not reproduce pulsations of velocity, pressure and temperature.
The LES and DNS approaches also predict the mean temperature with reasonable accuracy and provide oscillating solutions. The obtained amplitudes of the temperature pulsations exceed the experimental ones, while the spectral power densities at the check points inside the sodium flow agree well with the experimental data. However, in the performed numerical experiments the expenses of computational and time resources essentially exceed those for the URANS approach: by a factor of 350 for LES and 1500 for quasi-DNS.
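The spectral-power-density comparison mentioned above can be reproduced on any pulsation record with a segment-averaged FFT. This is a generic sketch, not the processing used in the study; the function name, signal frequencies and amplitudes are illustrative assumptions.

```python
import numpy as np

def psd_welch(x, fs, nseg=8):
    """Segment-averaged (Welch-style) power spectral density with a Hann
    window, numpy only; each segment is detrended before transforming."""
    n = len(x) // nseg
    win = np.hanning(n)
    scale = fs * (win ** 2).sum()       # one-sided density normalisation
    segs = [x[i * n:(i + 1) * n] for i in range(nseg)]
    p = np.mean([np.abs(np.fft.rfft((s - s.mean()) * win)) ** 2
                 for s in segs], axis=0) / scale
    return np.fft.rfftfreq(n, 1 / fs), p

# Synthetic pulsation record: a 4 Hz tone buried in noise
fs = 100.0
t = np.arange(0, 40, 1 / fs)
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 4.0 * t) + 0.5 * rng.standard_normal(t.size)
f, p = psd_welch(x, fs)
```

Averaging over segments trades frequency resolution for a smoother spectral estimate, which is what makes check-point spectra comparable between simulation and experiment.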
-
Reducing miss rate in a non-inclusive cache with inclusive directory of a chip multiprocessor
Computer Research and Modeling, 2023, v. 15, no. 3, pp. 639-656

Although the era of exponential performance growth in computer chips has ended, processor core counts have reached 16 or more even in general-purpose desktop CPUs. As DRAM throughput is unable to keep pace with this growth in computing power, CPU designers need to find ways of lowering memory traffic per instruction. The straightforward way to do this is to reduce the miss rate of the last-level cache. Assuming the “non-inclusive cache, inclusive directory” (NCID) scheme is already implemented, three ways of further reducing the cache miss rate were studied.
The first is to achieve more uniform usage of cache banks and sets by employing hash-based interleaving and indexing. In experiments on the SPEC CPU2017 refrate tests, even the simplest XOR-based hash functions demonstrated a performance increase of 3.2%, 9.1%, and 8.2% for CPU configurations with 16, 32, and 64 cores and last-level cache banks, comparable to the results of more complex matrix-, division- and CRC-based functions.
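As an illustration of what an XOR-based index can buy, here is a minimal sketch (an assumption for clarity, not one of the hash functions evaluated in the paper): the upper address bits are folded onto the conventional index bits, so power-of-two strided accesses spread over all sets instead of aliasing into one.

```python
def xor_set_index(addr, num_sets=1024, line_bytes=64):
    """XOR-folded cache set index for a power-of-two number of sets:
    fold successive log2(num_sets)-bit chunks of the line address
    together with XOR instead of taking only the low chunk."""
    line = addr // line_bytes           # drop the byte-offset bits
    bits = num_sets.bit_length() - 1    # log2(num_sets)
    index = 0
    while line:
        index ^= line & (num_sets - 1)  # fold one chunk
        line >>= bits
    return index

# A stride equal to num_sets * line_bytes defeats conventional indexing:
# every access lands in set 0, while the XOR fold spreads them out.
stride = 1024 * 64
conventional = {(i * stride // 64) % 1024 for i in range(256)}
folded = {xor_set_index(i * stride) for i in range(256)}
```

Here `conventional` collapses to a single set while `folded` covers 256 distinct sets, which is the uniformity effect the optimisation targets.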
The second optimisation is aimed at reducing replication at different cache levels by automatically switching to the exclusive scheme when it appears optimal. A known scheme of this type, FLEXclusion, was modified for use in NCID caches and showed an average performance gain of 3.8%, 5.4%, and 7.9% for 16-, 32-, and 64-core configurations.
The third optimisation is to increase the effective cache capacity using compression. The compression rate of the inexpensive and fast BDI*-HL (Base-Delta-Immediate Modified, Half-Line) algorithm, designed for NCID, was measured, and the respective increase in cache capacity yielded roughly 1% of the average performance increase.
All three optimisations can be combined; together they demonstrated a performance gain of 7.7%, 16% and 19% for CPU configurations with 16, 32, and 64 cores and banks, respectively.
-
Calibration of an elastostatic manipulator model using AI-based design of experiment
Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1535-1553

This paper demonstrates the advantages of using artificial intelligence algorithms within the theory of design of experiment, which makes it possible to improve the accuracy of parameter identification for an elastostatic robot model. Design of experiment for a robot consists in choosing the optimal configuration-external-force pairs for the identification algorithms and can be described in several main stages. At the first stage, an elastostatic model of the robot is created, taking into account all possible mechanical compliances. At the second stage, the objective function is selected; it can be represented both by classical optimality criteria and by criteria defined by the intended application of the robot. At the third stage, the optimal measurement configurations are found using numerical optimization. At the fourth stage, the position of the robot body is measured in the obtained configurations under the influence of an external force. At the last, fifth stage, the elastostatic parameters of the manipulator are identified from the measured data.
The objective function required for finding the optimal configurations for industrial robot calibration is constrained by mechanical limits, both on the possible rotation angles of the robot’s joints and on the possible applied forces. Solving this multidimensional constrained problem is not simple, so approaches based on artificial intelligence are proposed. To find the minimum of the objective function, the following methods, sometimes also called heuristics, were used: genetic algorithms, particle swarm optimization, the simulated annealing algorithm, etc. The obtained results were analyzed in terms of the time required to obtain the configurations, the optimal value attained, and the final accuracy after calibration. The comparison showed the advantages of the considered artificial-intelligence-based optimization techniques over classical methods of finding the optimal value. The results of this work allow reducing the time spent on calibration and increasing the positioning accuracy of the robot’s end-effector after calibration for contact operations with high loads, such as machining and incremental forming.
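To illustrate the kind of heuristic search involved, here is a minimal simulated annealing loop over a box-constrained domain standing in for joint limits. The objective below is a toy stand-in, not one of the paper's calibration criteria; the function name, cooling schedule and all constants are assumptions.

```python
import math
import random

def simulated_annealing(f, bounds, iters=5000, t0=1.0, seed=0):
    """Minimise f over a box domain (list of (lo, hi) pairs, e.g. joint
    limits) with simulated annealing and a linear cooling schedule."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    fx = f(x)
    best, fbest = list(x), fx
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-9
        # Gaussian perturbation, clipped back inside the joint limits
        cand = [min(hi, max(lo, xi + rng.gauss(0, 0.1 * (hi - lo))))
                for xi, (lo, hi) in zip(x, bounds)]
        fc = f(cand)
        # Accept improvements always; worse moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

# Toy stand-in for an optimality criterion over two "joint angles"
obj = lambda q: (q[0] - 1.0) ** 2 + (q[1] + 0.5) ** 2 + 0.1 * math.sin(5 * q[0])
q_opt, f_opt = simulated_annealing(obj, [(-math.pi, math.pi)] * 2)
```

Accepting occasional uphill moves early in the schedule is what lets the method escape the local minima that defeat classical gradient-based searches on such constrained criteria.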
-
The use of cluster analysis methods for the study of a set of feasible solutions of the phase problem in biological crystallography
Computer Research and Modeling, 2010, v. 2, no. 1, pp. 91-101

An X-ray diffraction experiment makes it possible to determine the magnitudes of the complex coefficients in the decomposition of the studied electron density distribution into a Fourier series. Determining the phase values lost in the experiment poses the central problem of the method, namely the phase problem. Some methods for solving the phase problem result in a set of feasible solutions. Cluster analysis may be used to investigate the composition of this set and to extract one or several typical solutions. An essential feature of the approach is estimating the closeness of two solutions by the map correlation between two aligned Fourier syntheses calculated with the phase sets under comparison. An interactive computer program, ClanGR, was designed to perform this analysis.
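The closeness measure itself is simple to state; below is a sketch of a map correlation between two density maps sampled on a common grid (the alignment step performed by ClanGR is omitted, and array shapes and names are illustrative assumptions).

```python
import numpy as np

def map_correlation(rho1, rho2):
    """Pearson correlation between two electron-density maps on the same
    grid -- the closeness measure used to compare phase sets; clustering
    can then use 1 - correlation as a distance."""
    a = rho1.ravel() - rho1.mean()
    b = rho2.ravel() - rho2.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Synthetic check: a map, a slightly perturbed copy, and an unrelated map
rng = np.random.default_rng(0)
rho = rng.standard_normal((16, 16, 16))
near = rho + 0.1 * rng.standard_normal(rho.shape)
far = rng.standard_normal(rho.shape)
```

Two syntheses from nearly identical phase sets correlate close to 1, while unrelated solutions hover near 0, which is what lets the cluster analysis separate typical solutions.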
-
Detection of the influence of the upper working roll’s vibration on sheet thickness in cold rolling with the help of DEFORM-3D software
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 111-116

Current trends in technical diagnostics are connected with the application of FEM computer simulation, which allows, to some extent, replacing real experiments, reducing the cost of investigations and minimizing risks. Already at the research and development stage, computer simulation allows diagnostics of equipment to detect the permissible fluctuations of its operating parameters. A peculiarity of diagnosing rolling equipment is that its functioning is directly tied to manufacturing a product of the required quality, including dimensional accuracy, so the design of techniques for technical diagnosis and diagnostic modeling is very important. A computer simulation of the cold rolling of a strip was carried out in which the upper working roll vibrated in the horizontal direction in accordance with published experimental data for a continuous 1700 rolling mill. The vibration of the working roll in the stand arose from the gap between the roll chock and the stand guide and led to periodic fluctuations of the strip thickness. The simulation in the DEFORM software produced a strip with longitudinal and transverse thickness variation. The visualization of the strip’s geometrical parameters according to the simulation data corresponded to the type of surface inhomogeneity of strips rolled in practice. A further analysis of the thickness variation was performed to identify, on the basis of the simulation, the sources of the periodic components of the strip thickness caused by equipment malfunctions. The advantage of computer simulation in searching for the sources of thickness variation is that different hypotheses about thickness formation can be tested without real experiments, reducing costs of various kinds.
Moreover, in simulation the initial strip thickness has no fluctuations, in contrast to industrial or laboratory experiments. On the basis of spectral analysis of the random process, it was established that the frequency of the thickness variation of the strip after rolling in one stand coincides with the frequency of the working roll’s vibration. The results of the computer simulation correlate with the results of the studies on the 1700 mill. Thus, the possibility of applying computer simulation to find the causes of strip thickness variation on an industrial rolling mill is demonstrated.
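The spectral step can be sketched generically: build a thickness record containing a periodic component at the roll-vibration frequency and locate the dominant FFT peak. The sampling rate, frequencies and amplitudes below are illustrative assumptions, not the 1700 mill data.

```python
import numpy as np

# Synthetic strip-thickness record: nominal 2 mm with a periodic component
# at the assumed roll-vibration frequency plus measurement noise.
fs = 200.0                      # sampling frequency, Hz (illustrative)
f_vib = 12.5                    # assumed roll-vibration frequency, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
h = (2.0
     + 0.004 * np.sin(2 * np.pi * f_vib * t)   # vibration-induced variation
     + 0.001 * rng.standard_normal(t.size))    # measurement noise

# Spectral analysis: the dominant peak of the detrended record should sit
# at the vibration frequency, tying the thickness variation to the roll.
spec = np.abs(np.fft.rfft(h - h.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_peak = freqs[spec.argmax()]
```

Matching `f_peak` against the known rotation and vibration frequencies of the stand components is what points the diagnosis at a specific malfunction.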
-
On the boundaries of optimally designed elastoplastic structures
Computer Research and Modeling, 2017, v. 9, no. 3, pp. 503-515

This paper studies elastoplastic bodies of minimum volume. One part of the boundary of every reviewed body is fixed to the same points in space, while stresses are prescribed on the remaining part of the boundary surface (the loaded surface). The shape of the loaded surface can vary in space, but the limit load factor, calculated on the assumption that the bodies are filled with an elastoplastic medium, must not be less than a fixed value. Besides, all varying bodies are supposed to contain some sample manifold of limited volume inside them.
The following problem has been set: what is the maximum number of cavities (or holes, in the two-dimensional case) that a minimum-volume body (plate) can have under the above limitations? It is established that, in order to define a mathematically correct problem, two extra conditions have to be met: the areas of the holes must be greater than a small constant, while the total length of the internal hole contour lines within the optimum figure must be minimal among the varying bodies. Thus, unlike most articles on the optimum design of elastoplastic structures, where a parametric analysis of acceptable solutions is done for a fixed topology, this paper seeks the topological connectivity parameter of the design.
The paper covers the case when the limit load factor for the sample manifold is quite large, while the areas of the acceptable holes in the varying plates are greater than a small constant. Arguments are brought forward proving that the Maxwell and Michell beam systems are the optimum figures under these conditions. As an example, microphotographs of standard biological bone tissues are presented. It is demonstrated that internal holes with large areas cannot be part of a Michell system, while a Maxwell beam system can include holes with significant areas. Sufficient conditions are given for hole formation within a solid plate of optimum volume. The results permit generalization to three-dimensional elastoplastic structures.
The paper concludes with the statement of mathematical problems arising from this new problem of optimally designed elastoplastic systems.
-
Determination of CT dose by means of noise analysis
Computer Research and Modeling, 2018, v. 10, no. 4, pp. 525-533

The article deals with the process of creating an effective algorithm for determining the number of quanta emitted by an X-ray tube in computed tomography (CT) studies. An analysis of domestic and foreign literature showed that most works in the field of radiometry and radiography take the tabulated values of X-ray absorption coefficients into account, while individual dose factors are not taken into account at all: since many studies lack the Dose Report, an average value is used instead to simplify the calculation of statistics. In this regard, it was decided to develop a method for detecting the number of ionizing quanta by analyzing the noise of CT data. The algorithm is based on a mathematical model of our own design combining the Poisson and Gauss distributions of the logarithmic value. The resulting mathematical model was tested on CT data of a calibration phantom consisting of three plastic cylinders filled with water, whose X-ray absorption coefficient is known from tabulated values. The data were obtained from several CT devices by different manufacturers (Siemens, Toshiba, GE, Philips). The developed algorithm made it possible to calculate the number of emitted X-ray quanta per unit time. These data, taking into account the noise level and the radii of the cylinders, were converted to X-ray absorption values, after which a comparison was made with the tabulated values. Applying the algorithm to CT data of various configurations yielded experimental results consistent with the theoretical part and the mathematical model. The results showed good accuracy of the algorithm and the mathematical apparatus, which confirms the reliability of the obtained data.
This mathematical model is already used in our own noise-reduction program for CT, where it serves as a method of creating a dynamic noise-reduction threshold. At the moment, the algorithm is being adapted to work with real patient CT data.
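The core Poisson relation such a method relies on can be illustrated on synthetic data: for gain-scaled Poisson counts (pixel value = k·N), the quantity mean²/variance recovers N regardless of the unknown gain k. This is a simplified sketch, not the authors' full model with the logarithmic reconstruction step; the function name and all values are assumptions.

```python
import numpy as np

def estimate_quanta(region):
    """Estimate the mean number of detected X-ray quanta behind a uniform
    image region.  For Poisson counting statistics scaled by an unknown
    detector gain k (pixel = k*N): mean = k*N and var = k^2*N, so
    mean^2 / var = N, independent of the gain."""
    m = region.mean()
    v = region.var(ddof=1)
    return m * m / v

# Synthetic check: Poisson counts with N = 5000, arbitrary gain 0.2
rng = np.random.default_rng(0)
region = 0.2 * rng.poisson(5000, size=100_000)
n_hat = estimate_quanta(region)
```

The gain cancels, which is why noise analysis of a uniform phantom region can recover the quanta count even when the scanner's calibration scale is unknown.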
-
Model for operational optimal control of financial resources distribution in a company
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 343-358

A critical analysis of existing approaches, methods and models for solving the problem of operational management of financial resources has been carried out in the article. A number of significant shortcomings of the presented models were identified, limiting the scope of their effective usage: the models are static, the probabilistic nature of financial flows is not taken into account, and the daily amounts of receivables and payables, which significantly affect the solvency and liquidity of the company, are not identified. This necessitates the development of a new model that reflects the essential properties of the system of planning financial flows: stochasticity, dynamism and non-stationarity.
A model for the distribution of financial flows has been developed. It is based on the principles of optimal dynamic control, provides financial resource planning that ensures an adequate level of liquidity and solvency of a company, and takes the uncertainty of the initial data into account. An algorithm for designing the target cash balance, based on the principle of ensuring a company’s financial stability under changing financial constraints, is proposed.
A characteristic feature of the proposed model is the representation of the cash distribution process as a discrete dynamic process, for which a plan of financial resource allocation is determined that ensures the extremum of an optimality criterion. Designing such a plan is based on coordinating payments (cash expenses) with cash receipts. This approach allows synthesizing different plans that differ in combinations of financial outflows and then selecting the best one according to a given criterion. The minimum total costs associated with the payment of fines for untimely financing of expenses were taken as the optimality criterion. The constraints of the model are the requirement to ensure the minimum allowable cash balances for the subperiods of the planning period, as well as the obligation to make payments during the planning period, taking into account the maturity of these payments. The suggested model efficiently solves the problem of distributing financial resources under uncertainty in the timing and amounts of receipts, coordinating cash inflows and outflows. The practical significance of the research lies in applying the developed model to improve the quality of financial planning and to increase the management and operational efficiency of a company.
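A minimal sketch of the payment-coordination idea (a greedy one-pass stand-in, not the paper's optimal dynamic control): each day, pay the due obligations in order of decreasing fine rate while keeping the minimum balance, and accrue fines on what must wait. The function name and all numbers are illustrative assumptions.

```python
def plan_payments(inflows, payments, min_balance=0.0):
    """Greedy daily cash-distribution plan.

    inflows: cash receipt per day; payments: list of (due_day, amount,
    daily_fine_rate).  Returns (schedule of (day, amount) paid, total fine
    accrued for payments delayed past their due day).
    """
    balance, total_fine, schedule = 0.0, 0.0, []
    pending = []                                   # obligations not yet paid
    for day, cash in enumerate(inflows):
        balance += cash
        pending += [list(p) for p in payments if p[0] == day]
        pending.sort(key=lambda p: -p[2])          # costliest fines first
        still_unpaid = []
        for due, amount, rate in pending:
            if balance - amount >= min_balance:
                balance -= amount                  # pay in full today
                schedule.append((day, amount))
            else:
                total_fine += amount * rate        # fine for one day's delay
                still_unpaid.append([due, amount, rate])
        pending = still_unpaid
    return schedule, total_fine

# Two receipts, three obligations; the tight first day forces one delay.
inflows = [100.0, 80.0, 0.0]
payments = [(0, 60.0, 0.02), (0, 70.0, 0.05), (1, 30.0, 0.01)]
schedule, fine = plan_payments(inflows, payments)
```

Comparing such feasible plans by their total fines is the selection step; the paper's model does this over the whole planning horizon at once rather than greedily day by day.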
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index