Search results for 'measure':
Articles found: 121
  1. Favorskaya A.V.
    Investigation the material properties of a plate by laser ultrasound using the analysis of multiple waves
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 653-673

    Ultrasound examination is a precision method for determining the elastic and strength properties of materials, owing to the short wavelengths generated in the material by the impact of a laser beam. In this paper, the wave processes arising during such measurements are considered in detail. It is shown that full-wave numerical modeling makes it possible to study in detail the types of waves, the topological characteristics of their profiles, and the arrival times of waves at various points; to identify the types of waves whose measurement is best suited to examining a sample made of a specific material and of a particular shape; and to develop measurement procedures.

    To carry out the full-wave modeling, a grid-characteristic method on structured grids was used, solving a hyperbolic system of equations that describes the propagation of elastic waves in the material of the thin plate under consideration, for the specific example of a thickness-to-width ratio of 1:10.
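    The grid-characteristic idea can be illustrated in one dimension (a sketch of our own, not code from the paper; the periodic boundary and parameter names are assumptions). The 1D elastic system $\rho v_t = \sigma_x$, $\sigma_t = \rho c^2 v_x$ diagonalizes into Riemann invariants that are simply advected along the characteristics $dx/dt = \pm c$ and updated by upwind interpolation:

```python
import numpy as np

def step(v, s, rho, c, dt, dx):
    """One grid-characteristic step for 1D elastic waves (periodic domain).

    The invariants w+ = v - s/z and w- = v + s/z travel with speeds +c and -c,
    so each is updated by first-order upwind interpolation along its own
    characteristic, then velocity v and stress s are reassembled.
    """
    z = rho * c                                   # acoustic impedance
    wp = v - s / z                                # right-going invariant (+c)
    wm = v + s / z                                # left-going invariant (-c)
    cfl = c * dt / dx                             # Courant number, must be <= 1
    wp = (1 - cfl) * wp + cfl * np.roll(wp, 1)    # upwind from the left
    wm = (1 - cfl) * wm + cfl * np.roll(wm, -1)   # upwind from the right
    return 0.5 * (wp + wm), 0.5 * z * (wm - wp)
```

With a Courant number of exactly 1 the update reduces to an exact shift of the invariants: an initial velocity pulse splits into two half-amplitude fronts, a convenient sanity check.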

    To simulate the elastic front produced in the plate by a laser beam, a model of the corresponding initial conditions was proposed. The wave effects arising under this model were compared both with those of a point source and with data from physical experiments on the propagation of laser ultrasound in metal plates.

    A study was carried out that identified the characteristic topological features of the wave processes under consideration. The main types of elastic waves generated by a laser beam are investigated, and the possibility of using them to study material properties is analyzed. A method based on the analysis of multiple waves is proposed; it was tested on synthetic data for studying the properties of a plate and showed good results.

    It should be noted that most studies of multiple waves aim to develop methods for their suppression. Multiple waves are normally not used to process the results of ultrasound studies, because they are difficult to detect in the recorded data of a physical experiment.

    Owing to the use of full-wave modeling and the analysis of spatial dynamic wave processes, multiple waves are considered in detail in this work, and a division of materials into three classes is proposed that makes it possible to use multiple waves to obtain information about the material of the plate.

    The main results of the work are the problem statements developed for numerically simulating the examination of plates of finite thickness by laser ultrasound; the features revealed of the wave phenomena arising in such plates; the method developed for studying the properties of a plate on the basis of multiple waves; and the classification of materials developed.

    The results presented in this paper may be of interest not only for developments in ultrasonic non-destructive testing but also for seismic exploration of the Earth's interior, since the proposed approach can be extended to more complex cases of heterogeneous media and applied in geophysics.

  2. Malovichko M.S., Petrov I.B.
    On numerical solution of joint inverse geophysical problems with structural constraints
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 329-343

    Inverse geophysical problems are difficult to solve due to their mathematically ill-posed formulation and large computational complexity. Geophysical exploration in frontier areas is even more complicated by the lack of reliable geological information. In this case, inversion methods that allow several types of geophysical data to be interpreted jointly are recognized to be of major importance. This paper is dedicated to one such inversion method, based on minimization of the determinant of the Gram matrix for a set of model vectors. Within this framework, we minimize a nonlinear functional consisting of the squared norms of the data residuals of different types, a sum of stabilizing functionals, and a term that measures the structural similarity between the different model vectors. We apply this approach to a synthetic seismic and electromagnetic data set. Specifically, we study joint inversion of the acoustic pressure response together with the controlled-source electrical field, imposing structural constraints on the resulting electrical conductivity and P-wave velocity distributions.

    We begin with the problem formulation and present the numerical method for the inverse problem. We implemented a conjugate-gradient algorithm for nonlinear optimization. The efficiency of our approach is demonstrated in numerical experiments in which the true 3D electrical conductivity model was assumed to be known, while the velocity model was constructed during inversion of the seismic data. The true velocity model was based on a simplified geological structure of a marine prospect. Synthetic seismic data were used as input for our minimization algorithm. The resulting velocity model not only fits the data but also shows structural similarity with the given conductivity model. Our tests have shown that an optimally chosen weight of the Gramian term may improve the resolution of the final models considerably.
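    The structural-coupling term described above can be sketched directly (an illustration under our assumptions, not the authors' code). The determinant of the Gram matrix of a set of model vectors vanishes exactly when the vectors are linearly dependent, so adding it to the misfit functional drives the models toward structural similarity:

```python
import numpy as np

def gramian(models):
    """Determinant of the Gram matrix for a list of equal-length model vectors.

    The Gram matrix holds all pairwise inner products; its determinant is zero
    exactly when the model vectors are linearly dependent (structurally alike).
    """
    M = np.stack(models)             # rows are model vectors
    return np.linalg.det(M @ M.T)    # det of the matrix of inner products

# parallel (structurally identical) vectors give a vanishing Gramian term
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 0.0])
coupled = gramian([a, 2 * a])        # ~0: no structural penalty
uncoupled = gramian([a, b])          # > 0: penalized by the functional
```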

  3. Grachev V.A., Nayshtut Yu.S.
    Buckling prediction for shallow convex shells based on the analysis of nonlinear oscillations
    Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1189-1205

    Buckling problems of thin elastic shells have become relevant again because of discrepancies between the standards in many countries on how to estimate the loads causing buckling of shallow shells and the results of experiments on thin-walled aviation structures made of high-strength alloys. The main contradiction is as follows: the ultimate internal stresses at shell buckling (collapse) turn out to be lower than those predicted by the adopted design theory used in the US and European standards. The current regulations are based on the static theory of shallow shells put forward in the 1930s: within the nonlinear theory of elasticity for thin-walled structures there exist stable solutions that differ significantly from the forms of equilibrium typical of small initial loads. The minimum load at which an alternative form of equilibrium appears (the lowest critical load) was used as the maximum permissible one. In the 1970s it was recognized that this approach is unacceptable for complex loadings. Such cases were of little practical relevance in the past, while now they occur in thinner structures used under complex conditions. Therefore, the original theory underlying bearing-capacity assessments needs to be revised. Recent mathematical results proving the asymptotic proximity of estimates based on two analyses (the three-dimensional dynamic theory of elasticity and the dynamic theory of shallow convex shells) can serve as the theoretical basis. This paper starts from the formulation of the dynamic theory of shallow shells, which reduces to a single resolving integro-differential equation (once a special Green function is constructed). It is shown that the resulting nonlinear equation allows separation of variables and has numerous time-periodic solutions that satisfy the Duffing equation with "a soft spring".
This equation has been thoroughly studied; its numerical analysis enables finding the amplitude and the oscillation period as functions of the properties of the Green function. If the shell is excited by a trial time-harmonic load, the movement of the surface points can be measured at the maximum amplitude. The study proposes an experimental set-up in which resonance oscillations are generated by a trial load normal to the surface. Experimental measurements of the shell movements, the amplitude and the oscillation period make it possible to estimate the safety factor of the structure's bearing capacity by non-destructive methods under operating conditions.
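    The amplitude-period dependence exploited above is easy to reproduce for the unforced Duffing equation with a soft spring, $\ddot{x} + \omega_0^2 x - \varepsilon x^3 = 0$. The sketch below (our illustration; the parameter values are assumptions) integrates the equation with RK4 and measures the period from the half-period zero crossing of the velocity; for a soft spring the period grows with amplitude, which is the dependence the proposed non-destructive test relies on.

```python
def duffing_period(amplitude, w0=1.0, eps=0.1, dt=1e-4):
    """Oscillation period of x'' + w0^2*x - eps*x^3 = 0 for x(0)=A, x'(0)=0."""
    def acc(x):
        return -w0**2 * x + eps * x**3
    x, v, t = amplitude, 0.0, 0.0
    while True:
        # one classical RK4 step for the pair (x, v)
        k1x, k1v = v, acc(x)
        k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x)
        k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x)
        k4x, k4v = v + dt*k3v, acc(x + dt*k3x)
        x_new = x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6
        v_new = v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6
        t += dt
        if v < 0.0 <= v_new:          # velocity returns to zero: half period
            return 2.0 * t
        x, v = x_new, v_new
```

For small amplitudes the period approaches the linear value $2\pi/\omega_0$; at larger amplitudes the softening cubic term lengthens it.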

  4. Doludenko A.N., Kulikov Y.M., Saveliev A.S.
    Chaotic flow evolution arising in a body force field
    Computer Research and Modeling, 2024, v. 16, no. 4, pp. 883-912

    This article presents the results of an analytical and computational study of the chaotic evolution of a regular velocity field generated by large-scale harmonic forcing. The authors obtained an analytical solution for the flow stream function and its derivative quantities (velocity, vorticity, kinetic energy, enstrophy and palinstrophy). Numerical modeling of the flow evolution was carried out with the OpenFOAM software package using an incompressible model, as well as with two in-house implementations of the CABARET and MacCormack methods employing a nearly incompressible formulation. Calculations were carried out on a sequence of nested meshes with $64^2$, $128^2$, $256^2$, $512^2$ and $1024^2$ cells for two characteristic (asymptotic) Reynolds numbers corresponding to laminar and turbulent evolution of the flow, respectively. Simulations show that blow-up of the analytical solution takes place in both cases. The energy characteristics of the flow are discussed on the basis of the energy curves and the dissipation rates. On the fine mesh, the numerical dissipation rate turns out to be several orders of magnitude smaller than its hydrodynamic (viscous) counterpart. Destruction of the regular flow structure is observed with all of the numerical methods, including at the late stages of laminar evolution, when the numerically obtained distributions are close to the analytical ones. It can be assumed that the prerequisite for the development of instability is the error accumulated during the calculation. This error leads to unevenness in the distribution of vorticity and, as a consequence, to variance in vortex intensity, and finally to chaotization of the flow. To study the processes of vorticity production, we used two integral vorticity-based quantities: the integral enstrophy ($\zeta$) and the palinstrophy ($P$). The formulation of the problem with periodic boundary conditions allows us to establish a simple connection between these quantities.
In addition, $\zeta$ can act as a measure of the eddy resolution of the numerical method, and palinstrophy determines the degree of production of small-scale vorticity.
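    Both integral quantities can be evaluated directly on a periodic grid. The following sketch (our illustration, not from the paper) computes $\zeta = \frac{1}{2}\int \omega^2\,dA$ and $P = \frac{1}{2}\int |\nabla\omega|^2\,dA$ spectrally from a 2D velocity field, which is natural under the periodic boundary conditions mentioned above:

```python
import numpy as np

def enstrophy_palinstrophy(u, v, L=2*np.pi):
    """Integral enstrophy and palinstrophy of a 2D periodic velocity field.

    Vorticity w = dv/dx - du/dy and its gradient are computed spectrally,
    so trigonometric test fields are differentiated exactly.
    """
    n = u.shape[0]
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)        # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing='ij')
    w = np.real(np.fft.ifft2(1j*kx*np.fft.fft2(v) - 1j*ky*np.fft.fft2(u)))
    wx = np.real(np.fft.ifft2(1j*kx*np.fft.fft2(w)))
    wy = np.real(np.fft.ifft2(1j*ky*np.fft.fft2(w)))
    dA = (L/n)**2                               # cell area
    zeta = 0.5*np.sum(w**2)*dA                  # integral enstrophy
    P = 0.5*np.sum(wx**2 + wy**2)*dA            # palinstrophy
    return zeta, P
```

For the stream function $\psi=\sin x\,\sin y$ the exact values are $\zeta=2\pi^2$ and $P=4\pi^2$, which the routine reproduces to machine precision.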

  5. Solbakov V.V., Zatsepa S.N., Ivchenko A.A.
    A mathematical model for estimating the zone of intense evaporation of gas condensate during emissions from shallow wells
    Computer Research and Modeling, 2025, v. 17, no. 2, pp. 243-259

    Safe execution of emergency recovery operations at offshore gas condensate wells requires taking into account the hazardous factors that hinder blowout-control measures. One such factor is the gassiness of the operations zone caused by the release from the water column of a large amount of natural gas, lighter than air, as well as vapours of the heavier components of the gas condensate. To estimate the distribution of explosive concentrations of petroleum product vapours in the near-surface layer of the atmosphere, the characteristics of the contamination source must be determined. Based on an analysis of theoretical work concerning the formation of the velocity field in the upper layer of the sea as a result of large amounts of gas reaching the surface, an analytical model is proposed for calculating the size of the area in which a significant amount of the gas condensate reaching the surface is vaporised during accidents at shallow-water wells. The stationary regime of reservoir fluid flow during the fountaining of offshore gas and oil wells with underwater wellheads is considered. A low-parametric model of oil product evaporation from films of different thickness is constructed. It is shown that the size of the zone of intensive evaporation at shallow-water wells is determined by the volume flow of the liquid fraction, its fractional composition and the chosen threshold for estimating the flow of oil product vapour into the atmosphere. In the context of this work, shallow-water wells are wells with a gas flow rate of 1 to 20 million cubic metres at sea depths of about 50–200 metres. In this case, the formation fluid jet from the wellhead on the seabed is transformed into a bubble plume, the stratification of the water column typical of the summer–autumn period does not prevent the plume from reaching the sea surface, and the bubble rise velocity allows the gas dissolution process to be disregarded.
The analysis was limited to near-calm hydrometeorological conditions. Such conditions are favourable for offshore operations but unfavourable from the point of view of the dispersion of high concentrations of oil product vapours in the near-surface layer of the atmosphere. As a result of this work, an analytical dependence is proposed for an approximate assessment of the zone of intensive evaporation of gas condensate.

  6. Chernavskaya O.D.
    Dynamical theory of information as a basis for natural-constructive approach to modeling a cognitive process
    Computer Research and Modeling, 2017, v. 9, no. 3, pp. 433-447

    The main statements and inferences of the dynamical theory of information (DTI) are considered. It is shown that DTI makes it possible to distinguish two essentially important types of information: objective (unconventional) and subjective (conventional) information. There are two ways of obtaining information: reception (perception of already existing information) and generation (production of new information). It is shown that the processes of generation and reception of information should proceed in two different subsystems of the same cognitive system. The main points of the natural-constructive approach to modeling the cognitive process are discussed. It is shown that any neuromorphic approach faces the problem of the explanatory gap between the “Brain” and the “Mind”, i.e. the gap between objectively measurable information about an ensemble of neurons (“Brain”) and subjective information about human consciousness (“Mind”). The Natural-Constructive Cognitive Architecture developed within this approach is discussed. It is a complex block-hierarchical combination of several neuroprocessors. The main constructive feature of this architecture is the splitting of the whole system into two linked subsystems, by analogy with the hemispheres of the human brain. One subsystem is responsible for processing new information, learning and creativity, i.e. for the generation of information. The other subsystem is responsible for processing already existing information, i.e. for the reception of information. It is shown that the lowest (zero) level of the hierarchy is represented by processors that record images of real objects (distributed memory) in response to sensory signals, which is objective information (and refers to the “Brain”). The next hierarchy levels are represented by processors containing symbols of the recorded images. It is shown that symbols represent subjective (conventional) information created by the system itself and providing its individuality.
The highest hierarchy levels, containing the symbols of abstract concepts, make it possible to interpret the concepts of “consciousness”, “sub-consciousness” and “intuition”, which refer to the field of “Mind”, in terms of an ensemble of neurons. Thus, DTI provides an opportunity to build a model that allows us to trace how the “Mind” could emerge from the “Brain”.

  7. We construct new tests that make it possible to increase the human capacity for information processing through the parallel execution of several logic operations of a prescribed type. To check the causes of this increase in capacity, we develop control tests on the same class of logic operations in which a parallel organization of the computations is ineffective. We use the apparatus of universal algebra and automata theory. This article continues a cycle of works investigating the human capacity for parallel computations; the main publications on this topic are given in the references. The tasks in the described tests can be stated as computing the result of a sequence of operations of the same type from some algebra. If the operation is associative, the computation can be parallelized effectively by suitable grouping, which in the theory of computation corresponds to the simultaneous work of several processors. Each processor transforms per unit time a certain known number of elements of the input data or of intermediate results (the processor productivity). It is not known what kind of data elements the brain uses for logical or mathematical computation, or how many elements are processed per unit time. The test therefore contains a sequence of tasks with different numbers of logical operations over a fixed alphabet; this number serves as the measure of a task's complexity. Analysis of the dependence of the solution time on the complexity makes it possible to estimate the processor productivity and the organization of the computation. For sequential computation only one processor works, and the solution time is a linear function of the complexity.
If new processors start working in parallel as the complexity of the task increases, then the dependence of the solution time on complexity is represented by a downward-convex curve. To detect the situation in which a person increases the speed of a single processor as complexity grows, we use task series with similar operations but in a non-associative algebra. In such tasks parallel computation gains little from increasing the number of processors; this constitutes the control set of tests. We also consider one more class of tests, based on computing the trajectory of the states of a formal automaton for a given input sequence. We investigate a special class of automata (relays) whose construction affects the effectiveness of computing the final automaton state in parallel. For all tests we estimate the effectiveness of parallel computation. This article does not contain experimental results.
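    The role of associativity described above can be made concrete: for an associative operation, a chain of $n$ applications can be regrouped into a balanced tree whose depth, i.e. the number of parallel rounds, is $\lceil\log_2 n\rceil$ instead of $n-1$ sequential steps. A minimal sketch (our illustration):

```python
def tree_reduce(op, items):
    """Reduce items pairwise, counting the number of parallel rounds.

    Each round combines adjacent pairs simultaneously; an odd leftover element
    is carried to the next round. Valid regrouping requires op associative.
    """
    rounds = 0
    while len(items) > 1:
        pairs = [op(items[i], items[i + 1]) for i in range(0, len(items) - 1, 2)]
        if len(items) % 2:
            pairs.append(items[-1])       # odd element carried forward
        items = pairs
        rounds += 1
    return items[0], rounds

# 16 numbers: the sum is obtained in 4 parallel rounds instead of 15 steps
value, rounds = tree_reduce(lambda a, b: a + b, list(range(16)))
```

For a non-associative operation the regrouping changes the result, which is exactly why the control tests gain little from parallelism.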

  8. Aleshin I.M., Malygin I.V.
    Machine learning interpretation of inter-well radiowave survey data
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 675-684

    Traditional geological prospecting methods are becoming ineffective: the exploration depth of kimberlite bodies and ore deposits has increased significantly. The only direct exploration method is to drill a system of wells to depths that provide access to the enclosing rocks. Because of the high cost of drilling, the role of inter-well survey methods has grown: they allow the mean well spacing to be increased without significantly raising the probability of missing a kimberlite or ore body. The inter-well radio wave survey method is effective for finding objects of high conductivity contrast. The physics of the method is based on the dependence of electromagnetic wave propagation on the conductivity of the propagation medium. The source and receiver of the electromagnetic radiation are electric dipoles placed in adjacent wells, with a known distance between them, so the absorption coefficient of the medium can be estimated from the rate of decrease of the radio wave amplitude. Rocks of low electrical resistance correspond to high absorption of radio waves. The inter-well measurement data thus allow an effective electrical resistance (or conductivity) of the rock to be estimated. Typically, the source and receiver are lowered into adjacent wells synchronously. The electric field amplitude measured at the receiver allows the average value of the attenuation coefficient along the line connecting the source and receiver to be estimated. The measurements are taken during stops, approximately every 5 m, so the distance between stops is much less than the distance between adjacent wells. This leads to significant spatial anisotropy in the distribution of the measured data. The drill grid covers a large area, and our aim is to build a three-dimensional model of the distribution of the electrical properties of the inter-well space over the whole area. The anisotropy of the spatial distribution complicates the use of standard geostatistical approaches.
To build a three-dimensional model of the attenuation coefficient, we used one of the methods of machine learning, the method of nearest neighbors. In this method, the value of the absorption coefficient at a given point is calculated from the $k$ nearest measurements; the number $k$ has to be determined from additional considerations. The effect of the anisotropy of the spatial distribution can be reduced by changing the spatial scale in the horizontal direction; the scale factor $\lambda$ is another external parameter of the problem. To select the values of the parameters $k$ and $\lambda$ we used the coefficient of determination. To demonstrate the construction of a three-dimensional image of the absorption coefficient, we applied the procedure to inter-well radio wave survey data obtained at one of the sites in Yakutia.
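    The prediction scheme described above admits a short sketch (our own; the plain-average rule and all names are assumptions): the horizontal coordinates are rescaled by $\lambda$ before the $k$ nearest measurements are averaged, which compensates for sampling that is dense along wells but sparse between them.

```python
import numpy as np

def knn_predict(points, values, query, k=5, lam=1.0):
    """k-nearest-neighbour estimate with anisotropic horizontal rescaling.

    points: (N, 3) array of measurement coordinates (x, y, depth);
    lam stretches or shrinks the horizontal axes before distances are taken.
    """
    scale = np.array([lam, lam, 1.0])            # rescale x, y; keep depth
    d = np.linalg.norm(points * scale - query * scale, axis=1)
    nearest = np.argsort(d)[:k]                  # indices of k closest points
    return values[nearest].mean()                # plain average of neighbours
```

A small $\lambda$ makes horizontally distant measurements look close, so the same query point can draw on neighbouring wells rather than only on its own well.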

  9. Bogomolov S.V.
    Stochastic formalization of the gas dynamic hierarchy
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 767-779

    Mathematical models of gas dynamics and its computational industry are, in our opinion, far from perfect. We look at this problem from the point of view of a clear probabilistic micro-model of a gas of hard spheres, relying both on the theory of random processes and on classical kinetic theory in terms of densities of distribution functions in phase space. Namely, we first construct a system of nonlinear stochastic differential equations (SDEs), and then a generalized, random and non-random, integro-differential Boltzmann equation taking correlations and fluctuations into account. The key feature of the initial model is the random nature of the intensity of the jump measure and its dependence on the process itself.

    We briefly recall the transition to increasingly coarse meso- and macro-approximations as the dimensionless parameter, the Knudsen number, decreases. We obtain stochastic and non-random equations, first in phase space (a meso-model in terms of SDEs with respect to the Wiener measure and the Kolmogorov–Fokker–Planck equations), and then in coordinate space (macro-equations that differ from the Navier–Stokes system and from quasi-gas-dynamics systems). The main feature of this derivation is a more accurate averaging over velocity, owing to the analytical solution of the stochastic differential equations with respect to the Wiener measure, in the form of which the intermediate meso-model in phase space is presented. This approach differs significantly from the traditional one, which uses not the random process itself but its distribution function. The emphasis is placed on the transparency of the assumptions in passing from one level of detail to another, rather than on numerical experiments, which contain additional approximation errors.
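    For illustration only (this is not the paper's model, and all parameter values are assumptions): the simplest velocity SDE with respect to the Wiener measure is the Ornstein–Uhlenbeck process $dv = -\gamma v\,dt + \sigma\,dW$, and the Euler–Maruyama scheme below shows the basic discretization pattern used when such meso-models are simulated as random processes rather than through their distribution functions.

```python
import numpy as np

def euler_maruyama(v0, gamma, sigma, dt, n_steps, rng, n_paths=10000):
    """Euler–Maruyama integration of dv = -gamma*v dt + sigma dW.

    An ensemble of sample paths is advanced together; each step adds the
    deterministic drift and an independent Wiener increment of variance dt.
    """
    v = np.full(n_paths, v0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), v.size)   # Wiener increments
        v += -gamma * v * dt + sigma * dW
    return v

rng = np.random.default_rng(0)
v = euler_maruyama(1.0, gamma=1.0, sigma=1.0, dt=1e-2, n_steps=500, rng=rng)
# the ensemble relaxes toward the stationary law: mean -> 0,
# variance -> sigma^2 / (2 * gamma) = 0.5
```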

    The theoretical power of the microscopic representation of macroscopic phenomena is also important as an ideological support for particle methods, which are an alternative to difference and finite element methods.

  10. Podlipnova I.V., Persiianov M.I., Shvetsov V.I., Gasnikova E.V.
    Transport modeling: averaging price matrices
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 317-327

    This paper considers various approaches to averaging the generalized travel costs calculated for different modes of travel in a transportation network. A mode of travel is understood to mean both a mode of transport, for example a car or public transport, and movement without the use of transport, for example on foot. The task of calculating trip matrices includes the task of calculating the total matrices, in other words estimating the total demand for movements by all modes, as well as the task of splitting the matrices by mode, also called modal split. To calculate trip matrices, gravity, entropy and other models are used, in which the probability of movement between zones is estimated on the basis of some measure of the remoteness of these zones from each other. Usually, the generalized cost of moving along the optimal path between the zones is used as the distance measure. However, the generalized cost of movement differs across modes, so when calculating the total trip matrices it becomes necessary to average the generalized costs over the modes of movement. The averaging procedure is subject to the natural requirement of monotonicity in all arguments, which is not met by some commonly used averaging methods, for example weighted averaging. The modal split problem is solved by the methods of discrete choice theory, within which correct methods have been developed for averaging the utilities of alternatives that are monotonic in all arguments. The authors propose an adaptation of the methods of discrete choice theory for calculating the average cost of movements in the gravity and entropy models.
The transfer of averaging formulas from the context of the modal splitting model to the trip matrix calculation model requires the introduction of new parameters and the derivation of conditions for the possible value of these parameters, which was done in this article. The issues of recalibration of the gravitational function, which is necessary when switching to a new averaging method, if the existing function is calibrated taking into account the use of the weighted average cost, were also considered. The proposed methods were implemented on the example of a small fragment of the transport network. The results of calculations are presented, demonstrating the advantage of the proposed methods.
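    The abstract does not state the exact averaging rule, but the standard monotone composite cost in discrete choice theory, and a natural candidate here, is the logsum, $\bar{c} = -\tfrac{1}{\theta}\ln\sum_m e^{-\theta c_m}$. A sketch under that assumption:

```python
import numpy as np

def logsum_cost(costs, theta=1.0):
    """Logsum composite cost of a set of modes: monotone in every argument.

    The minimum cost is factored out before exponentiating to keep the
    computation numerically stable for large theta or large costs.
    """
    c = np.asarray(costs, dtype=float)
    m = c.min()
    return m - np.log(np.exp(-theta * (c - m)).sum()) / theta

# composite cost over car/transit/walking generalized costs (hypothetical values);
# it rises if any single mode gets worse, and never exceeds the cheapest mode
c_bar = logsum_cost([10.0, 12.0, 15.0])
```

Unlike a weighted average with cost-dependent weights, raising any one mode's cost always raises this composite cost, which is exactly the monotonicity requirement discussed above.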


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

