Search results for 'mathematical model of measurement':
Articles found: 34
  1. Chernov I.A.
    High-throughput identification of hydride phase-change kinetics models
    Computer Research and Modeling, 2020, v. 12, no. 1, pp. 171-183

    Metal hydrides are an interesting class of chemical compounds that can reversibly bind a large amount of hydrogen and are, therefore, of interest for energy applications. Understanding the factors affecting the kinetics of hydride formation and decomposition is especially important. Features of the material, experimental setup and conditions affect the mathematical description of the processes, which can undergo significant changes during the processing of experimental data. The article proposes a general approach to numerical modeling of the formation and decomposition of metal hydrides and to solving inverse problems of estimating material parameters from measurement data. The models are divided into two classes: diffusive ones, which take into account the gradient of hydrogen concentration in the metal lattice, and models with fast diffusion. The former are more complex and take the form of non-classical boundary value problems of parabolic type; a rather general approach to the grid solution of such problems is described. The latter are solved relatively simply, but can change greatly when the model assumptions change. Our experience in processing experimental data shows that a flexible software tool is needed: a tool that allows, on the one hand, building models from standard blocks, freely changing them if necessary, and, on the other hand, avoiding the implementation of routine algorithms. It should also be adapted to high-performance systems of different paradigms. These conditions are satisfied by the HIMICOS library presented in the paper, which has been tested on a large number of experimental data. It allows simulating the kinetics of formation and decomposition of metal hydrides, as well as related processes, at three levels of abstraction. 
At the low level, the user defines the interface procedures, such as calculating the time layer from the previous layer or from the entire history, calculating the observed value and the independent variable from the task variables, and comparing the computed curve with the reference one. At the middle level, special algorithms can be used for solving quite general parabolic-type boundary value problems with free boundaries and with various quasilinear (i.e., linear with respect to the derivative only) boundary conditions, as well as for calculating the distance between curves in different metric spaces and with different normalizations. At the high level, it is enough to choose a ready-made, tested model for a particular material and modify it to match the experimental conditions.
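The "time layer" building block mentioned above can be illustrated with a minimal sketch: an explicit conservative finite-difference step for the 1D diffusion equation c_t = D c_xx with zero-flux boundaries. This is an illustrative toy, not the HIMICOS implementation; the library handles far more general free-boundary problems.

```python
import numpy as np

def time_layer(c, D, dx, dt):
    """Advance one time layer of c_t = D * c_xx on a uniform grid with
    an explicit conservative scheme and zero-flux boundaries."""
    lam = D * dt / dx**2
    assert lam <= 0.5, "explicit scheme stability limit"
    flux = np.diff(c)                 # proportional to interface fluxes
    dc = np.zeros_like(c)
    dc[:-1] += flux                   # diffusive exchange with right neighbour
    dc[1:] -= flux                    # diffusive exchange with left neighbour
    return c + lam * dc

# usage: an initial concentration pulse spreads out; the conservative
# form keeps the total hydrogen content constant to rounding error
c = np.zeros(51)
c[20:31] = 1.0
for _ in range(2000):
    c = time_layer(c, D=1.0, dx=0.1, dt=0.004)
```

The conservative (flux-difference) form is chosen so that the discrete total mass is preserved exactly, which is the natural invariant to check when assembling models from such blocks.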

  2. Laser damage to transparent solids is a major factor limiting the output power of laser systems. For laser rangefinders, the most likely cause of destruction of elements of the optical system (lenses, mirrors), which in practice are usually somewhat dusty, is not optical breakdown resulting from an avalanche, but a thermal effect on a dust speck deposited on an element of the optical system (EOS) that leads to its ignition. It is the ignition of the dust speck that initiates the process of EOS damage.

    The corresponding model of this process leading to the ignition of a dust speck takes into account the nonlinear Stefan–Boltzmann law of thermal radiation and the indefinitely long thermal effect of periodic radiation on the EOS and the dust speck. The model is described by a nonlinear system of differential equations for two functions: the EOS temperature and the dust speck temperature. It is proved that, due to the accumulating effect of the periodic thermal action, the dust speck reaches its ignition temperature under almost any a priori possible variations, during this process, of the thermophysical parameters of the EOS and the dust speck, as well as of the heat exchange coefficients between them and the surrounding air. Averaging these parameters over the variables related to both the volume and the surfaces of the dust speck and the EOS is correct under the natural constraints specified in the paper. The entire practically significant range of thermophysical parameters is covered thanks to the use of dimensionless variables in the problem (including in the numerical results).

    A thorough mathematical study of the corresponding nonlinear system of differential equations made it possible, for the first time in the general case of thermophysical parameters and characteristics of the thermal effect of periodic laser radiation, to find a formula for the permissible radiation intensity that does not lead to destruction of the EOS through ignition of a dust speck deposited on it. In the special case of the data from the Grasse laser ranging station (in the south of France), the theoretical value of the permissible intensity found for the general case almost matches the value observed experimentally at the observatory.

    In parallel with the solution of the main problem, we derive a formula for the power absorption coefficient of laser radiation by an EOS expressed in terms of four dimensionless parameters: the relative intensity of laser radiation, the relative illumination of the EOS, the relative heat transfer coefficient from the EOS to the surrounding air, and the relative steady-state temperature of the EOS.
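The structure of such a system can be illustrated with a dimensionless toy version: two coupled temperature equations with pulsed periodic heating, Stefan–Boltzmann-type T^4 losses relative to an ambient level of 1, and linear heat exchange between the EOS and the dust speck. All coefficients below are invented for illustration; the paper's actual equations and parameters differ.

```python
import numpy as np

def rhs(t, y, period=1.0, duty=0.5, q=5.0):
    """Toy system: y = (EOS temperature, dust speck temperature),
    pulsed heating q, T^4 radiative losses, linear heat exchange.
    Coefficients are illustrative, not the paper's."""
    T1, T2 = y
    pulse = q if (t % period) < duty * period else 0.0
    dT1 = 0.1 * pulse - 0.5 * (T1**4 - 1.0) + 0.2 * (T2 - T1)
    dT2 = pulse - (T2**4 - 1.0) + (T1 - T2)
    return np.array([dT1, dT2])

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta integrator."""
    h = (t1 - t0) / n
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# usage: after many heating periods both temperatures settle into a
# periodic regime above the ambient level
y_final = rk4(rhs, [1.0, 1.0], 0.0, 10.0, 20000)
```

The accumulating effect of the periodic action shows up as the cycle-averaged temperatures settling strictly above ambient, which is the mechanism behind the eventual ignition discussed in the abstract.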

  3. Lukyantsev D.S., Afanasiev N.T., Tanaev A.B., Chudaev S.O.
    Numerical-analytical modeling of gravitational lensing of the electromagnetic waves in random-inhomogeneous space plasma
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 433-443

    A numerical-analytical tool for modeling the characteristics of electromagnetic wave propagation in chaotic space plasma, taking gravitational effects into account, is developed for interpreting measurement data from new-generation astrophysical precision instruments. The problem of wave propagation in curved (Riemannian) space is solved in Euclidean space by introducing an effective refractive index of vacuum. The gravitational potential can be calculated for various models of the mass distribution of astrophysical objects by solving Poisson's equation, after which the effective refractive index of vacuum can be evaluated. An approximate model of the effective refractive index is suggested under the assumption that the various objects contribute additively to the total gravitational field. The characteristics of electromagnetic waves in the gravitational field of astrophysical objects are calculated in the geometrical-optics approximation, under the condition that the spatial scales of the refractive index are much larger than the wavelength. Ray differential equations in Euler form constitute the basis of the numerical-analytical tool for modeling the trajectory characteristics of the waves. Chaotic inhomogeneities of the space plasma are introduced through a model of the spatial correlation function of the refractive index. Refractive scattering of waves is calculated in the geometrical-optics approximation. Integral equations for the statistical moments of the lateral deviations of rays in the observer's image plane are obtained. Using analytical transformations, the integrals for the moments are reduced to a system of first-order ordinary differential equations for the joint numerical calculation of the mean and mean-square deviations of the rays. 
Results of numerical-analytical modeling of the trajectory picture of electromagnetic wave propagation in interstellar space are shown, taking into account the impact of the gravitational fields of space objects and the refractive scattering of waves on inhomogeneities of the refractive index of the surrounding plasma. Based on the modeling results, a quantitative estimate of the conditions for stochastic blurring of the gravitational lensing effect in various frequency ranges is performed. It is shown that operating frequencies in the meter wavelength range represent a conditional low-frequency limit for observing the gravitational lensing effect in stochastic space plasma. The proposed numerical-analytical modeling tool can be used to analyze the structure of electromagnetic radiation of a quasar propagating through a group of galaxies.
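The effective-index idea can be sketched in miniature: in the weak-field picture, a point mass gives an effective vacuum index n(r) = 1 + eps/r (with eps standing for 2GM/c^2 in dimensionless units), and integrating the geometrical-optics ray equation reproduces the classic small-angle deflection of about 2*eps/b for impact parameter b. The integrator below is an illustrative toy, not the paper's numerical-analytical tool.

```python
import numpy as np

def trace_ray(eps, b, s_max=100.0, ds=0.01):
    """Trace a ray through the effective vacuum index n(r) = 1 + eps/r
    of a point mass at the origin, integrating the ray equation
    du/ds = (grad n - (grad n . u) u) / n along the arc length s."""
    r = np.array([-s_max / 2.0, b])   # start far left, impact parameter b
    u = np.array([1.0, 0.0])          # initial direction: +x
    for _ in range(int(s_max / ds)):
        rad = np.hypot(r[0], r[1])
        grad = -eps / rad**3 * r      # gradient of eps/r
        n = 1.0 + eps / rad
        u = u + ds * (grad - grad.dot(u) * u) / n
        u /= np.linalg.norm(u)        # keep u a unit vector
        r = r + ds * u
    return np.arctan2(-u[1], u[0])    # net bending toward the mass

# usage: weak-field deflection should be close to the analytical 2*eps/b
angle = trace_ray(eps=0.01, b=1.0)
```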

  4. Safiullina L.F., Gubaydullin I.M.
    Analysis of the identifiability of the mathematical model of propane pyrolysis
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1045-1057

    The article presents the numerical modeling and study of the kinetic model of propane pyrolysis. The study of the reaction kinetics is a necessary stage in modeling the dynamics of the gas flow in the reactor.

    The kinetic model of propane pyrolysis is a nonlinear system of first-order ordinary differential equations with parameters, the role of which is played by the reaction rate constants. Mathematical modeling of the processes is based on the mass conservation law. To solve the initial (forward) problem, implicit methods for stiff ordinary differential equation systems are used. The model contains 60 input kinetic parameters and 17 output parameters corresponding to the reaction substances, of which only 9 are observable. In solving the parameter estimation (inverse) problem, the question arises of the non-uniqueness of the set of parameters that satisfy the experimental data. Therefore, before solving the inverse problem, the possibility of determining the model parameters is analyzed (identifiability analysis).

    To analyze identifiability, we use the orthogonal method, which has proven itself well for analyzing models with a large number of parameters. The algorithm is based on the analysis of the sensitivity matrix by the methods of differential and linear algebra, which shows the degree of dependence of the unknown parameters of the models on the given measurements. The analysis of sensitivity and identifiability showed that the parameters of the model are stably determined from a given set of experimental data. The article presents a list of model parameters from most to least identifiable. Taking into account the analysis of the identifiability of the mathematical model, restrictions were introduced on the search for less identifiable parameters when solving the inverse problem.

    The inverse problem of estimating the parameters was solved using a genetic algorithm. The article presents the found optimal values of the kinetic parameters. A comparison of the experimental and calculated dependences of the concentrations of propane, main and by-products of the reaction on temperature for different flow rates of the mixture is presented. The conclusion about the adequacy of the constructed mathematical model is made on the basis of the correspondence of the results obtained to physicochemical laws and experimental data.
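The orthogonal method mentioned above can be sketched as follows: columns of the sensitivity matrix are ranked by repeatedly selecting the column with the largest residual norm and projecting it out of the remaining columns, so that parameters whose sensitivities are nearly linear combinations of already-selected ones fall to the bottom of the ranking. This is a generic sketch of the technique, not the authors' code.

```python
import numpy as np

def orthogonal_ranking(S):
    """Rank parameters from most to least identifiable with the
    orthogonal method: repeatedly pick the sensitivity-matrix column
    with the largest residual norm, then project it out of the
    remaining columns.  Columns left with near-zero residual norm are
    practically non-identifiable."""
    R = np.array(S, dtype=float)
    order, remaining = [], list(range(R.shape[1]))
    while remaining:
        k = max(remaining, key=lambda j: np.linalg.norm(R[:, j]))
        order.append(k)
        remaining.remove(k)
        v = R[:, k]
        nv = np.linalg.norm(v)
        if nv > 0:
            v = v / nv
            for j in remaining:
                R[:, j] = R[:, j] - v.dot(R[:, j]) * v
    return order

# usage: the third column is 0.1 * the first, so after projection its
# residual vanishes and it ranks last
S = np.array([[3.0, 0.0, 0.3],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
ranking = orthogonal_ranking(S)
```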

  5. Ansori Moch.F., Sumarti N.N., Sidarto K.A., Gunadi I.I.
    An Algorithm for Simulating the Banking Network System and Its Application for Analyzing Macroprudential Policy
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1275-1289

    Modeling banking systems using a network approach has received growing attention in recent years. One of the notable models is that developed by Iori et al., who proposed a banking system model for analyzing systemic risks in interbank networks. The model is built on the simple dynamics of several bank balance sheet variables, such as deposits, equity, loans, liquid assets, and interbank lending (or borrowing), in the form of difference equations. Each bank faces random shocks in deposits and loans. The balance sheet is updated at the beginning or end of each period. In the model, banks are grouped into potential lenders and potential borrowers. The potential borrowers are those that lack liquidity, and the potential lenders are those that have excess liquidity after dividend payment and channeling new investment. The borrowers and the lenders are connected through the interbank market. Each borrower is linked to some percentage of randomly chosen potential lenders, from which it borrows funds to maintain its liquidity safety net. If the demand for borrowed funds can be met by the supply of excess liquidity, the borrowing bank survives. If not, it is deemed to be in default and is removed from the banking system. However, in their paper, most of the interbank borrowing–lending mechanism is described qualitatively rather than through detailed mathematical or computational analysis. Therefore, in this paper, we elaborate the mathematical parts of borrowing and lending in the interbank market and present an algorithm for simulating the model. We also perform simulations to analyze the effects of the model's parameters on banking stability, using the number of surviving banks as the measure. We apply this technique to analyze the effects of a macroprudential policy called the loan-to-deposit-ratio-based reserve requirement on banking stability.
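The borrowing–lending matching step described above can be sketched in a few lines: banks with a liquidity shortfall approach a few randomly ordered surplus banks, and a borrower whose shortfall cannot be covered defaults. This is a minimal sketch of the matching mechanism only, with invented numbers; it omits the balance-sheet dynamics, shocks, and policy parameters of the full model.

```python
import random

def interbank_round(liquidity, links=2, rng=None):
    """One clearing round of the interbank market: banks with negative
    liquidity try to borrow from up to `links` randomly ordered banks
    holding a surplus; a borrower that cannot cover its shortfall
    defaults and is removed (sketch only)."""
    rng = rng or random.Random(0)
    surplus = {i: max(x, 0.0) for i, x in liquidity.items()}
    survivors, defaulted = set(liquidity), set()
    for i, x in liquidity.items():
        if x >= 0.0:
            continue
        need = -x
        pool = [j for j in liquidity if j != i and surplus[j] > 0.0]
        rng.shuffle(pool)
        for j in pool[:links]:
            take = min(need, surplus[j])
            surplus[j] -= take
            need -= take
            if need <= 1e-12:
                break
        if need > 1e-12:            # demand not met: the bank defaults
            survivors.discard(i)
            defaulted.add(i)
    return survivors, defaulted

# usage: bank 2's shortfall (10) exceeds the surplus remaining after
# bank 1 has borrowed, so bank 2 defaults
survivors, defaulted = interbank_round({0: 5.0, 1: -2.0, 2: -10.0, 3: 1.0})
```

The number of surviving banks returned by repeated rounds of such a step is exactly the stability measure the abstract uses.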

  6. Matveev A.V.
    Modeling the kinetics of radiopharmaceuticals with iodine isotopes in nuclear medicine problems
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 883-905

    Radiopharmaceuticals with iodine radioisotopes are now widely used in imaging and non-imaging methods of nuclear medicine. When evaluating the results of radionuclide studies of the structural and functional state of organs and tissues, parallel modeling of the kinetics of the radiopharmaceutical in the body plays an important role. The difficulty of such modeling lies between two opposite extremes. On the one hand, excessive simplification of the anatomical and physiological characteristics of the organism when splitting it into compartments may result in the loss or distortion of information important for clinical diagnosis; on the other hand, accounting for all possible interdependencies in the functioning of organs and systems will, on the contrary, either produce an excess of data useless for clinical interpretation or make the mathematical model intractable. Our work develops a unified approach to the construction of mathematical models of the kinetics of radiopharmaceuticals with iodine isotopes in the human body during diagnostic and therapeutic procedures of nuclear medicine. Based on this approach, three- and four-compartment pharmacokinetic models were developed, and the corresponding calculation programs were created in the C++ programming language for processing and evaluating the results of radionuclide diagnostics and therapy. Various methods for identifying model parameters from quantitative data of radionuclide studies of the functional state of vital organs are proposed. The results of pharmacokinetic modeling for radionuclide diagnostics of the liver, kidneys, and thyroid using iodine-containing radiopharmaceuticals are presented and analyzed. Using clinical and diagnostic data, individual pharmacokinetic parameters of the transport of different radiopharmaceuticals in the body (transport constants, half-life periods, maximum activity in the organ and the time of its achievement) were determined. 
It is shown that the pharmacokinetic characteristics of each patient are strictly individual and cannot be described by averaged kinetic parameters. Within the framework of the three pharmacokinetic models, "activity–time" relationships were obtained and analyzed for different organs and tissues, including tissues in which the activity of a radiopharmaceutical is impossible or difficult to measure by clinical methods. The features and results of simulation and dosimetric planning of radioiodine therapy of the thyroid gland are also discussed. It is shown that the values of absorbed radiation doses are very sensitive to the kinetic parameters of the compartment model. Therefore, special attention should be paid to obtaining accurate quantitative data from ultrasound and thyroid radiometry and to identifying the simulation parameters based on them. The work is based on the principles and methods of pharmacokinetics. For the numerical solution of the systems of differential equations of the pharmacokinetic models, we used Runge–Kutta methods and the Rosenbrock method. The Hooke–Jeeves method was used to find the minimum of a function of several variables when identifying the modeling parameters.
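The shape of such a compartment model can be illustrated with a minimal linear three-compartment chain (blood -> organ -> excretion) with first-order transport constants and a physical decay constant, integrated by the classical Runge–Kutta method. The matrix structure and all rate constants below are illustrative assumptions, not the paper's models or clinical values.

```python
import numpy as np

def chain_matrix(k12, k23, lam):
    """System matrix of a linear three-compartment chain
    (blood -> organ -> excretion) with physical decay constant lam."""
    return np.array([[-(k12 + lam), 0.0, 0.0],
                     [k12, -(k23 + lam), 0.0],
                     [0.0, k23, -lam]])

def rk4_linear(K, a0, t_end, n):
    """Classical Runge-Kutta integration of dA/dt = K A."""
    h = t_end / n
    A = np.zeros((n + 1, len(a0)))
    A[0] = a0
    for i in range(n):
        y = A[i]
        k1 = K @ y
        k2 = K @ (y + h / 2 * k1)
        k3 = K @ (y + h / 2 * k2)
        k4 = K @ (y + h * k3)
        A[i + 1] = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return A

# usage: the organ column A[:, 1] gives an "activity-time" curve that
# rises, peaks, and falls; total activity decays as exp(-lam * t)
K = chain_matrix(k12=0.5, k23=0.2, lam=0.1)
A = rk4_linear(K, [1.0, 0.0, 0.0], t_end=24.0, n=10000)
```

For this chain the organ curve peaks at t = ln(k12/k23 * (k23+lam)/(k12+lam) ... ) reducing, with these constants, to ln 2 / 0.3, which makes the model convenient for checking an integrator against the analytical solution.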

  7. Shpitonkov M.I.
    Application of correlation adaptometry technique to sports and biomedical research
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 345-354

    The paper outlines approaches to mathematical modeling with the correlation adaptometry technique, which is widely used in biology and medicine. The analysis is based on models employed in descriptions of structured biological systems. It is assumed that the distribution density of the biological population satisfies the Kolmogorov–Fokker–Planck equation. Using this technique, the effectiveness of treatment of patients with obesity was evaluated. All patients were divided into three groups depending on the degree of obesity and the nature of comorbidity. A decrease in the weight of the correlation graph, computed from the indicators measured in the patients, is shown for all studied groups, which characterizes the effectiveness of the treatment. The technique was also used to assess the intensity of training loads in academic rowing for three age groups. It was shown that the athletes of the youth group trained under the highest strain. The technique of correlation adaptometry was further used to evaluate the effectiveness of hormone replacement therapy in women. All patients were divided into four groups depending on the prescribed drug. A standard analysis of the dynamics of the mean values of the indicators showed that the averages normalized during treatment for all groups of patients. However, correlation adaptometry revealed that the weight of the correlation graph decreased during the first six months and increased during the second six months for all study groups. This indicates that the annual course of hormone replacement therapy is excessively long and that a transition to a six-month course would be practical.
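The central quantity here, the weight of the correlation graph, is commonly computed as the sum of absolute pairwise correlations between measured indicators that exceed a threshold; a minimal sketch follows. The threshold value is an assumption for illustration; the paper does not state one in the abstract.

```python
import numpy as np

def correlation_graph_weight(X, threshold=0.5):
    """Weight of the correlation graph: the sum of |r_ij| over all
    pairs of indicators (columns of X) whose absolute Pearson
    correlation exceeds the threshold."""
    r = np.corrcoef(X, rowvar=False)
    m = r.shape[0]
    return sum(abs(r[i, j])
               for i in range(m) for j in range(i + 1, m)
               if abs(r[i, j]) > threshold)

# usage: two strongly coupled indicators and one independent indicator
# give a weight close to 1 (only the coupled pair passes the threshold)
rng = np.random.default_rng(0)
a = rng.normal(size=200)
X = np.column_stack([a, a + 0.01 * rng.normal(size=200),
                     rng.normal(size=200)])
w = correlation_graph_weight(X)
```

A falling graph weight over a course of treatment is then read, as in the abstract, as the indicators decoupling, i.e. the organism leaving the stressed, highly correlated state.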

  8. Shumov V.V.
    Mathematical models of combat and military operations
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 907-920

    Modeling the fight against terrorist, pirate, and robbery acts at sea is an urgent scientific task, given the prevalence of acts of force and the insufficient number of works on this issue. The actions of pirates and terrorists are diverse. Using a base ship, they can attack vessels up to 450–500 miles from the coast. Having chosen a target, they pursue it and use their weapons to board the ship. Actions to free a ship captured by pirates or terrorists include blocking the ship, predicting where the pirates might be located on board, penetrating it (from board to board, by air, or from under water), and clearing the ship's premises. An analysis of the specialized literature on the actions of pirates and terrorists showed that an act of force (and actions to neutralize it) consists of two stages: first, blocking the vessel, which consists in forcing it to stop, and second, neutralizing the team (terrorists, pirates), including boarding the vessel and clearing it. The stages of the cycle are matched by indicators: the probability of blocking and the probability of neutralization. The variables of the act-of-force model are the numbers of ships (vessels, boats) of the attackers and defenders, as well as the strength of the attackers' capture group and of the crew of the ship under attack. The model parameters (indicators of naval and combat superiority) were estimated by the maximum likelihood method using an international database of incidents at sea. The values of these parameters are 7.6–8.5. Such high values of the superiority parameters reflect the capabilities of the parties in acts of force. An analytical method for calculating the superiority parameters is proposed and statistically substantiated. 
The following indicators are taken into account in the model: the ability of the parties to detect the enemy, the speed and maneuverability characteristics of the vessels, the height of the vessel and the characteristics of the boarding equipment, the characteristics of weapons and protective equipment, etc. Using the Becker model and the theory of discrete choice, the probability of failure of the force act is estimated. The significance of the obtained models for combating acts of force in the sea space lies in the possibility of quantitative substantiation of measures to protect the ship from pirate and terrorist attacks and deterrence measures aimed at preventing attacks (the presence of armed guards on board the ship, assistance from warships and helicopters).
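The abstract does not give the model's exact functional form, so the estimation step can only be sketched under an assumption: a commonly used superiority-type victory function P = x^a / (x^a + y^a), with the parameter a fitted to incident outcomes by maximum likelihood (here, a coarse grid search). The incident data below are invented for illustration.

```python
import math

def win_prob(x, y, a):
    """Assumed superiority-type victory function P = x^a / (x^a + y^a)
    for opposing strengths x and y and superiority parameter a."""
    return x**a / (x**a + y**a)

def mle_superiority(incidents, grid=None):
    """Maximum-likelihood estimate of the superiority parameter from
    (x, y, outcome) incident records via a coarse grid search."""
    grid = grid if grid is not None else [0.5 * k for k in range(1, 41)]
    def loglik(a):
        return sum(math.log(win_prob(x, y, a) if won
                            else 1.0 - win_prob(x, y, a))
                   for x, y, won in incidents)
    return max(grid, key=loglik)

# usage: 96 successes in 100 attacks at strength ratio 3:2 pins the
# parameter near ln(24) / ln(1.5), i.e. roughly 7.8, since the MLE
# solves (x/y)^a = p / (1 - p)
incidents = [(3, 2, True)] * 96 + [(3, 2, False)] * 4
a_hat = mle_superiority(incidents)
```

Large fitted values of a correspond to outcomes being decided almost deterministically by even modest superiority in strength, which is consistent with the 7.6–8.5 range reported in the abstract.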

  9. Kalachin S.V.
    Fuzzy modeling of human susceptibility to panic situations
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 203-218

    The study of the mechanism of development of mass panic, in view of its extreme importance and social danger, is a pressing scientific task. Available information about this mechanism is based mainly on the work of psychologists and is inherently imprecise. Therefore, the theory of fuzzy sets was chosen as the tool for developing a mathematical model of a person's susceptibility to panic situations. As a result of the study, a fuzzy model was developed consisting of three blocks: "Fuzzification", where the degrees of membership of the input parameter values in fuzzy sets are calculated; "Inference", where, based on the membership degrees of the input parameters, the resulting membership function of the output value is calculated; and "Defuzzification", where, using the center-of-gravity method, the single quantitative value of the output variable characterizing a person's susceptibility to panic situations is determined. Since the real quantitative values of the linguistic variables for a person's mental properties are unknown, it is impossible to assess the quality of the developed model without endangering people. Therefore, the quality of the fuzzy modeling results was estimated by the coefficient of determination R^2, which showed that the developed fuzzy model belongs to the category of good-quality models (R^2 = 0.93), confirming the legitimacy of the assumptions made during its development. According to the simulation results, the susceptibility to panic situations of sanguine and choleric persons can be classified as "increased" (0.88), and that of phlegmatic and melancholic persons as "moderate" (0.38). This means that choleric and sanguine persons can become epicenters of panic and initiators of a stampede, while phlegmatic and melancholic persons can become obstacles on evacuation routes. 
This should be taken into account when developing effective evacuation measures, whose main task is to evacuate people quickly and safely from adverse conditions. In the approved methods, the calculation of normative values of safety parameters is based on simplified analytical models of human flow movement, because a large number of factors must be taken into account, some of which are quantitatively uncertain. The obtained quantitative estimates of a person's susceptibility to panic situations will increase the accuracy of such calculations.
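The three blocks named in the abstract can be sketched with a minimal Mamdani-style pipeline: triangular fuzzification of one input, min–max inference over two rules, and center-of-gravity defuzzification. The membership functions, rules, and input scale below are invented for illustration; only the block structure follows the abstract.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def susceptibility(excitability):
    """Fuzzification -> inference -> centre-of-gravity defuzzification
    for a single illustrative input on a 0..10 scale."""
    u = np.linspace(0.0, 1.0, 1001)           # output universe
    low = tri(excitability, -5.0, 0.0, 5.0)   # fuzzification of the input
    high = tri(excitability, 5.0, 10.0, 15.0)
    # rules: low excitability -> "moderate" output, high -> "increased"
    agg = np.maximum(np.minimum(low, tri(u, 0.10, 0.38, 0.66)),
                     np.minimum(high, tri(u, 0.60, 0.88, 1.16)))
    return float((u * agg).sum() / agg.sum())  # centre of gravity

# usage: a calm profile defuzzifies near the "moderate" peak (0.38),
# an excitable one near the "increased" region
moderate = susceptibility(0.0)
increased = susceptibility(10.0)
```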

  10. Zavodskikh R.K., Efanov N.N.
    Performance prediction for chosen types of loops over one-dimensional arrays with embedding-driven intermediate representations analysis
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 211-224

    The method of mapping the set of intermediate representations (IR) of C and C++ programs to a vector embedding space is considered in order to create an empirical framework for static performance prediction on top of the LLVM compiler infrastructure. Using embeddings makes programs easier to compare, since it avoids direct comparison of Control Flow Graphs (CFG) and Data Flow Graphs (DFG). The method is based on a series of transformations of the initial IR: instrumentation — injection of artificial instructions in an instrumentation compiler pass, depending on the load-offset delta of the current instruction relative to the previous one; mapping of the instrumented IR into a multidimensional vector with IR2Vec; and dimension reduction with the t-SNE (t-distributed stochastic neighbor embedding) method. The D1 cache-miss ratio measured with the perf stat tool is taken as the performance metric. A heuristic criterion for deciding which programs have a higher or lower cache-miss ratio is given, based on the embeddings of the programs in 2D space. The instrumentation compiler pass developed in this work is described: how it generates and injects artificial instructions into the IR within the memory model used. The software pipeline that implements the performance estimation on top of the LLVM compiler infrastructure is presented. Computational experiments are performed on synthetic tests, which are sets of programs with the same CFGs but different sequences of offsets used when accessing a one-dimensional array of a given size. The correlation coefficient between the performance metric and the distance to the worst program's embedding is measured and shown to be negative regardless of the t-SNE initialization, which supports the heuristic criterion. The process of generating such synthetic tests is also described. Moreover, the spread of the performance metric over the programs of such a test is proposed as a quantity to be improved by exploring further test generators.
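The correlation check at the heart of the heuristic criterion can be sketched directly: compute each program's distance to the worst program's embedding and correlate it with the measured miss ratio. The 2D points below are synthetic stand-ins for the IR2Vec + t-SNE output, not real measurements.

```python
import numpy as np

def distance_metric_correlation(embeddings, miss_ratio):
    """Correlate each program's cache-miss ratio with the distance from
    its embedding to the embedding of the worst (highest-miss-ratio)
    program; a negative value supports the heuristic criterion."""
    worst = embeddings[int(np.argmax(miss_ratio))]
    d = np.linalg.norm(embeddings - worst, axis=1)
    return float(np.corrcoef(d, miss_ratio)[0, 1])

# usage: programs laid out on a line with miss ratio falling as the
# distance to the worst program grows give a strongly negative correlation
emb = np.array([[float(i), 0.0] for i in range(10)])
miss = 1.0 - 0.1 * np.arange(10)
r = distance_metric_correlation(emb, miss)
```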


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

