Search results for 'point estimation':
Articles found: 37
  1. Golubev V.I., Khokhlov N.I.
    Estimation of anisotropy of seismic response from fractured geological objects
    Computer Research and Modeling, 2018, v. 10, no. 2, pp. 231-240

    Seismic surveying is a common method of prospecting for and exploring oil and natural gas deposits. Invented at the beginning of the twentieth century, it has developed significantly and is currently used by almost all oilfield service companies. Its main advantages are the acceptable cost of fieldwork (in comparison with drilling wells) and the accuracy of estimating the characteristics of the subsurface. However, with the discovery of non-traditional deposits (for example, the Arctic shelf and the Bazhenov Formation), the task of improving existing seismic data processing technologies and creating new ones has become important. Significant progress in this direction is possible with numerical simulation of the propagation of seismic waves in realistic models of the geological medium, since an arbitrary internal structure of the medium can be specified and the synthetic signal response then evaluated.

    The present work is devoted to the study of spatial dynamic processes occurring in a geological medium containing fractured inclusions during seismic exploration. The authors constructed a three-dimensional model of a layered massif containing a layer of fluid-saturated cracks, which makes it possible to estimate the signal response as the structure of the inhomogeneous inclusion is varied. To describe the physical processes, we use a second-order system of partial differential equations for a linearly elastic body, which is solved numerically by a grid-characteristic method on hexahedral grids. The crack planes are identified at the stage of grid construction, and an additional correction is then applied to ensure a correct seismic response for model parameters typical of geological media.
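    The grid-characteristic idea is easiest to see in one spatial dimension. Below is a minimal sketch (ours, not the authors' 3D hexahedral solver): 1D linear acoustics is rewritten in Riemann invariants that are advected along the characteristics $dx/dt = \pm c$ and updated by first-order upwind transfer; all parameter values are illustrative.

```python
import numpy as np

# 1D linear acoustics: v_t + p_x / rho = 0,  p_t + rho * c^2 * v_x = 0.
# The Riemann invariants I = v +/- p/(rho*c) are constant along dx/dt = +/-c,
# so each time step simply shifts them upwind (the grid-characteristic idea).
rho, c = 2500.0, 3000.0            # illustrative density and wave speed
nx, L = 400, 4000.0
dx = L / nx
x = np.linspace(0.0, L, nx)
dt = 0.5 * dx / c                  # Courant number 0.5

v = np.zeros(nx)                          # particle velocity
p = np.exp(-((x - L / 2) / 100.0) ** 2)   # Gaussian pressure pulse

for _ in range(300):
    Ip = v + p / (rho * c)         # invariant carried to the right
    Im = v - p / (rho * c)         # invariant carried to the left
    s = c * dt / dx                # fractional shift along the characteristic
    Ip = (1 - s) * Ip + s * np.roll(Ip, 1)    # upwind transfer, speed +c
    Im = (1 - s) * Im + s * np.roll(Im, -1)   # upwind transfer, speed -c
    v = 0.5 * (Ip + Im)            # reconstruct the physical variables
    p = 0.5 * (Ip - Im) * rho * c
```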

    In the paper, three-component areal seismograms with a common shot point were obtained. On their basis, the effect of the structure of the fractured medium on the anisotropy of the seismic response recorded on the ground surface at different distances from the source was estimated. It is established that the kinematic characteristics of the signal remain constant, while the dynamic characteristics for ordered and disordered models can differ by tens of percent.

    Views (last year): 11. Citations: 4 (RSCI).
  2. Aleshin I.M., Malygin I.V.
    Machine learning interpretation of inter-well radiowave survey data
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 675-684

    Traditional geological prospecting methods are becoming ineffective as the exploration depth of kimberlite bodies and ore deposits has increased significantly. The only direct exploration method is to drill a system of wells to depths that provide access to the enclosing rocks. Because of the high cost of drilling, the role of inter-well survey methods has grown: they allow increasing the mean well spacing without significantly raising the probability of missing a kimberlite or ore body. The inter-well radio wave survey method is effective for locating high-contrast conductivity objects. The physics of the method is based on the dependence of electromagnetic wave propagation on the conductivity of the propagation medium. The source and the receiver of electromagnetic radiation are electric dipoles placed in adjacent wells. Since the distance between the source and the receiver is known, the absorption coefficient of the medium can be estimated from the rate at which the radio wave amplitude decreases; rocks with low electrical resistance correspond to high absorption of radio waves. The inter-well measurement data thus allow estimating an effective electrical resistance (or conductivity) of the rock. Typically, the source and the receiver are lowered into adjacent wells synchronously. The electric field amplitude measured at the receiver yields the average attenuation coefficient along the line connecting the source and the receiver. Measurements are taken during stops, approximately every 5 m; the distance between stops is much smaller than the distance between adjacent wells, which leads to significant spatial anisotropy in the distribution of the measured data. The drill grid covers a large area, and our goal is to build a three-dimensional model of the distribution of the electrical properties of the inter-well space over the whole area. The anisotropy of the spatial distribution makes the standard geostatistical approach hard to apply. To build a three-dimensional model of the attenuation coefficient, we used one of the methods of machine learning, the method of nearest neighbors, in which the value of the absorption coefficient at a given point is calculated from the $k$ nearest measurements; the number $k$ must be determined from additional considerations. The effect of the spatial distribution anisotropy can be reduced by changing the spatial scale in the horizontal direction; the scale factor $\lambda$ is yet another external parameter of the problem. To select the values of the parameters $k$ and $\lambda$, we used the coefficient of determination, as illustrated in the sketch below. To demonstrate the construction of a three-dimensional image of the absorption coefficient, we applied the procedure to inter-well radio wave survey data obtained at one of the sites in Yakutia.
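    A minimal sketch of the parameter selection described above, assuming scikit-learn and synthetic stand-in data (all names and values are illustrative, not the survey dataset): horizontal coordinates are rescaled by $\lambda$, a $k$-nearest-neighbors regressor is fitted, and the pair $(k, \lambda)$ with the best determination coefficient $R^2$ on held-out points is kept.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import r2_score

# xyz: (n, 3) measurement coordinates, alpha: (n,) attenuation coefficients.
# Synthetic placeholders stand in for the survey data described above.
rng = np.random.default_rng(0)
xyz = rng.uniform(0, 1000, size=(500, 3))
alpha = np.sin(xyz[:, 2] / 100.0) + 0.1 * rng.standard_normal(500)

best = (None, None, -np.inf)
for lam in (0.1, 0.2, 0.5, 1.0):             # horizontal scale factor lambda
    X = xyz.copy()
    X[:, :2] *= lam                          # rescale horizontal distances
    X_tr, X_te, y_tr, y_te = train_test_split(X, alpha, random_state=0)
    for k in (3, 5, 10, 20):                 # number of nearest neighbors
        model = KNeighborsRegressor(n_neighbors=k).fit(X_tr, y_tr)
        r2 = r2_score(y_te, model.predict(X_te))  # determination coefficient
        if r2 > best[2]:
            best = (lam, k, r2)

print("selected lambda=%.1f, k=%d, R^2=%.3f" % best)
```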

    Views (last year): 3.
  3. In this work we have developed a new efficient program for the numerical simulation of 3D global chemical transport on an adaptive finite-difference grid, which allows us to concentrate grid points in the regions where the flow variables change sharply and to coarsen the grid in the regions where they behave smoothly, significantly reducing the grid size. We represent the adaptive grid with a combination of dynamic (tree, linked list) and static (array) data structures. The dynamic data structures are used for grid reconstruction, while the calculations of the flow variables are based on the static data structures. The introduction of the static data structures speeds the program up by a factor of 2 in comparison with the conventional approach of representing the grid with dynamic data structures only.
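    A minimal sketch of the hybrid grid representation (ours, in 1D for brevity; the authors' code is 3D): the refinement hierarchy lives in a tree that is cheap to modify, and before each compute phase the leaf cells are flattened into contiguous arrays that the update loop sweeps over.

```python
import numpy as np

class Cell:
    """Tree node of the adaptive grid: a leaf carrying data,
    or an internal node with two children (1D bisection)."""
    def __init__(self, x0, x1, value=0.0):
        self.x0, self.x1, self.value = x0, x1, value
        self.children = None

    def refine(self):
        xm = 0.5 * (self.x0 + self.x1)
        self.children = [Cell(self.x0, xm, self.value),
                         Cell(xm, self.x1, self.value)]

def leaves(cell):
    if cell.children is None:
        yield cell
    else:
        for ch in cell.children:
            yield from leaves(ch)

root = Cell(0.0, 1.0)
root.refine()
root.children[0].refine()        # dynamic phase: adapt the tree where needed

# Static phase: flatten the leaves into contiguous arrays for fast sweeps.
cells = list(leaves(root))
centers = np.array([0.5 * (c.x0 + c.x1) for c in cells])
values = np.array([c.value for c in cells])
values += 0.1 * np.sin(centers)  # stand-in for the flow-variable update

for c, v in zip(cells, values):  # copy the results back into the tree
    c.value = v
```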

    We wrote and tested our program on a computer with 6 CPU cores. Using the computer microarchitecture simulator gem5, we estimated the scalability of the program on a significantly greater number of cores (up to 32), using several models of a computer system with the design “computational cores – cache – main memory”. It has been shown that the microarchitecture of a computer system has a significant impact on scalability: the same program demonstrates different efficiency on different computer microarchitectures. For example, we obtain a speedup of 14.2 on a processor with 32 cores and 2 cache levels, but a speedup of 22.2 on a processor with 32 cores and 3 cache levels. The execution time of a program on a computer model in gem5 is $10^4$–$10^5$ times greater than the execution time of the same program on a real computer and equals 1.5 hours for the most complex model.

    Also in this work we describe how to configure gem5 and how to perform simulations with gem5 most efficiently.

  4. Abakumov A.I., Izrailsky Y.G.
    Models of phytoplankton distribution over chlorophyll in various habitat conditions. Estimation of aquatic ecosystem bioproductivity
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1177-1190

    A model of phytoplankton abundance dynamics is proposed that depends on changes in the chlorophyll content of phytoplankton under the influence of changing environmental conditions. The model takes into account the dependence of biomass growth on environmental conditions, as well as on photosynthetic chlorophyll activity. The light and dark stages of photosynthesis are distinguished. The processes of chlorophyll consumption during photosynthesis in the light and of chlorophyll mass growth together with phytoplankton biomass are described. The model takes into account environmental conditions such as mineral nutrients, illumination and water temperature. The model is spatially distributed, with the spatial variable corresponding to the mass fraction of chlorophyll in phytoplankton; thereby the possible spread of chlorophyll content in phytoplankton is taken into consideration. The model calculates the density distribution of phytoplankton over the fraction of chlorophyll in it. In addition, the rate of production of new phytoplankton biomass is calculated. In parallel, point analogs of the distributed model are considered. The diurnal and seasonal (during the year) dynamics of the phytoplankton distribution over the chlorophyll fraction are demonstrated. The characteristics of the rate of primary production under daily or seasonally changing environmental conditions are given. Model characteristics of the dynamics of phytoplankton biomass growth show that in the light this growth is about twice as large as in the dark, which shows that illumination significantly affects the rate of production. The seasonal dynamics demonstrate accelerated growth of biomass in spring and autumn. The spring maximum is associated with warming under the conditions of biogenic substances accumulated in winter, and the slightly smaller autumn maximum with the accumulation of nutrients during the summer decline in phytoplankton biomass; in summer the biomass decreases, again due to a deficiency of nutrients. Thus, in the presence of light, mineral nutrition plays the main role in phytoplankton dynamics.

    In general, the model demonstrates dynamics of phytoplankton biomass that are qualitatively similar to classical concepts under daily and seasonal changes in the environment. The model seems suitable for assessing the bioproductivity of aquatic ecosystems. It can be supplemented with equations and terms for a more detailed description of the complex processes of photosynthesis. The introduction of variables in the physical habitat space and the coupling of the model with satellite information on the water surface lead to model estimates of the bioproductivity of vast marine areas.

  5. Dvinskikh D.M., Pirau V.V., Gasnikov A.V.
    On the relations of stochastic convex optimization problems with empirical risk minimization problems on $p$-norm balls
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319

    In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e.g., risk minimization) and mathematical statistics (e.g., maximum likelihood estimation). There are two main approaches to solving such problems, namely the Stochastic Approximation approach (online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline approach). In the offline approach, the problem is replaced by its empirical counterpart (the empirical risk minimization problem). The natural question is how to define the problem sample size, i.e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is also a solution of the original problem with the desired precision. This issue is one of the main issues in modern machine learning and optimization. In the last decade, a lot of significant advances were made in these areas for convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of arbitrary balls in $p$-norms. We also explore how the parameter $p$ affects the estimates of the required sample size for the empirical risk minimization problem.

    In this paper, both convex and saddle point optimization problems are considered. For strongly convex problems, the existing results on equal sample sizes in both approaches (online and offline) were generalized to arbitrary norms. Moreover, it was shown that the strong convexity condition can be weakened: the obtained results are valid for functions satisfying the quadratic growth condition. In the case when this condition is not met, it is proposed to use regularization of the original problem in an arbitrary norm. In contrast to convex problems, saddle point problems are much less studied. For saddle point problems, the sample size was obtained under the condition of $\gamma$-growth of the objective function; for $\gamma = 1$, this condition is the sharp minimum condition of convex problems. In this article, it was shown that the sample size in the case of a sharp minimum is almost independent of the desired accuracy of the solution of the original problem.
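    The online/offline distinction can be illustrated on a toy scalar problem (a hedged sketch, not the paper's setting): for minimizing $\mathbb{E}[(x - \xi)^2]/2$ with $\xi \sim \mathcal{N}(\theta, 1)$, Stochastic Approximation updates the iterate one sample at a time, while Sample Average Approximation draws $N$ samples up front and minimizes the empirical risk exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 3.0                      # unknown mean; minimizer of E[(x - xi)^2]/2

# Online (Stochastic Approximation): one noisy gradient per sample.
x = 0.0
for k in range(1, 1001):
    xi = theta + rng.standard_normal()
    x -= (1.0 / k) * (x - xi)    # step 1/k along the gradient of (x - xi)^2/2

# Offline (Sample Average Approximation / Monte Carlo): draw N samples
# up front; the empirical risk minimizer is the sample mean.
N = 1000
xs = theta + rng.standard_normal(N)
x_saa = xs.mean()

print(f"SA estimate: {x:.3f}, SAA estimate: {x_saa:.3f}, true: {theta}")
```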

  6. Kondratyev M.A.
    Forecasting methods and models of disease spread
    Computer Research and Modeling, 2013, v. 5, no. 5, pp. 863-882

    The number of papers addressing the forecasting of infectious disease morbidity is rapidly growing due to the accumulation of available statistical data. This article surveys the major approaches to short-term and long-term morbidity forecasting, pointing out their limitations and practical applicability. The paper presents the conventional time series analysis methods — regression and autoregressive models; machine learning-based approaches — Bayesian networks and artificial neural networks; case-based reasoning; and filtration-based techniques. The best-known mathematical models of infectious diseases are mentioned: classical equation-based models (deterministic and stochastic) and modern simulation models (network and agent-based); a minimal deterministic example is sketched below.
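    For concreteness, here is a minimal deterministic equation-based example of the class mentioned above, the classical SIR model (the parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Classical SIR model: S' = -beta*S*I/N, I' = beta*S*I/N - gamma*I,
    R' = gamma*I, with total population N = S + I + R conserved."""
    S, I, R = y
    N = S + I + R
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

beta, gamma = 0.3, 0.1           # illustrative contact and recovery rates
sol = solve_ivp(sir, (0, 160), [990.0, 10.0, 0.0],
                args=(beta, gamma), dense_output=True)
t = np.linspace(0, 160, 161)
S, I, R = sol.sol(t)
print(f"epidemic peak: {I.max():.0f} infectious at day {t[I.argmax()]:.0f}")
```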

    Views (last year): 71. Citations: 19 (RSCI).
  7. Aristov V.V., Ilyin O.V.
    Methods and problems in the kinetic approach for simulating biological structures
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 851-866

    The biological structure is considered as an open nonequilibrium system whose properties can be described on the basis of kinetic equations. New problems with nonequilibrium boundary conditions are introduced. The nonequilibrium distribution tends gradually to an equilibrium state. The region of spatial inhomogeneity has a scale depending on the rate of mass transfer in the open system and the characteristic time of metabolism. In the proposed approximation, the internal energy of the motion of molecules is much less than the energy of translational motion; in other terms, the kinetic energy of the average blood velocity is substantially higher than the energy of the chaotic motion of the same particles. We state that the relaxation problem models a living system. The flow of entropy to the system decreases downstream, which corresponds to Schrödinger's general idea that a living system “feeds on” negentropy. We introduce a quantity that determines the complexity of the biosystem: the difference between the nonequilibrium kinetic entropy and the equilibrium entropy at each spatial point, integrated over the entire spatial region. Solutions of the problems of spatial relaxation allow us to estimate the size of biosystems as regions of nonequilibrium. The results are compared with empirical data; in particular, for mammals we conclude that the larger the animal, the smaller the specific energy of metabolism. This feature is reproduced in our model, since the span of the nonequilibrium region is larger in a system where the reaction rate is lower or, in terms of the kinetic approach, where the relaxation time of the interaction between molecules is longer. The approach is also used to estimate a part of a living system, namely a green leaf. The problems of aging as degradation of an open nonequilibrium system are considered. The analogy is related to the structure: in a closed system, equilibrium is attained by the same molecules, while in an open system a transition occurs to the equilibrium of different particles, which change due to metabolism. Two essentially different time scales are distinguished, whose ratio is approximately constant for various animal species. Under the assumption that these two time scales exist, the kinetic equation splits into two equations, describing the metabolic (stationary) and “degradative” (nonstationary) parts of the process.
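    In symbols (our reconstruction from the verbal description above; the notation is not the authors'): with $f(x, v, t)$ the distribution function over velocities $v$, the local kinetic entropy is $S[f](x) = -\int f \ln f \, dv$, and the complexity measure is the integrated entropy difference $\Delta S = \int_{\Omega} \left( S[f](x) - S_{\mathrm{eq}}(x) \right) dx$ taken over the entire spatial region $\Omega$.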

    Views (last year): 31.
  8. Grachev V.A., Nayshtut Yu.S.
    Relaxation oscillations and buckling of thin shells
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 807-820

    The paper reviews possibilities for predicting the buckling of thin cylindrical shells with non-destructive techniques during operation. It studies shallow shells made of high-strength materials. Such structures are known for surface displacements exceeding the thickness of the elements. In the shells studied, relaxation oscillations of significant amplitude can be generated even under relatively low internal stresses. The problem of the cylindrical shell oscillation is mechanically and mathematically modeled in a simplified form by reduction to an ordinary differential equation. To create the model, the work of many authors who studied the geometry of the surface formed after buckling (postbuckling behavior) was used. The nonlinear ordinary differential equation for the oscillating shell matches the well-known Duffing equation. It is important that there is a small parameter before the second time derivative in the Duffing equation. The latter circumstance enables a detailed analysis of the obtained equation and a description of the physical phenomena — relaxation oscillations — that are unique to thin high-strength shells.
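    A minimal numerical illustration of such relaxation cycles (ours, with illustrative coefficients rather than ones derived from shell mechanics): a Duffing-type equation $\varepsilon \ddot{x} + c\dot{x} + x^3 - x = a\cos\omega t$ with a small parameter $\varepsilon$ before the second derivative, integrated with a stiff solver. The solution drifts slowly along one potential well and jumps rapidly to the other.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps, c, a, w = 0.01, 1.0, 0.5, 0.2   # illustrative coefficients

def duffing(t, y):
    """eps*x'' + c*x' + x**3 - x = a*cos(w*t) as a first-order system.
    The small parameter eps makes the problem stiff: slow drift along one
    well is punctuated by fast jumps to the other (a relaxation cycle)."""
    x, v = y
    return [v, (a * np.cos(w * t) - c * v - x**3 + x) / eps]

sol = solve_ivp(duffing, (0.0, 200.0), [1.0, 0.0],
                method="Radau", max_step=0.5, dense_output=True)
t = np.linspace(0, 200, 2001)
x = sol.sol(t)[0]
print("x range: %.2f .. %.2f" % (x.min(), x.max()))  # jumps between wells
```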

    It is shown that harmonic oscillations of the shell around the equilibrium position and stable relaxation oscillations are defined by the bifurcation point of the solutions to the Duffing equation. This is the first point in the Feigenbaum sequence at which stable periodic motion is converted into dynamic chaos. The amplitude and the period of the relaxation oscillations are calculated from the physical properties and the level of internal stresses within the shell. Two cases of loading are reviewed: compression along the generating elements and external pressure.

    It is highlighted that if the external forces vary in time according to the harmonic law, the periodic oscillation of the shell (nonlinear resonance) is a combination of slow and stick-slip movements. Since the amplitude and the frequency of the oscillations are known, this makes it possible to propose an experimental facility for predicting shell buckling with non-destructive techniques. The following safety requirement is set: maximum load combinations must not cause displacements exceeding specified limits. Based on the results of the experimental measurements, a formula is obtained to estimate the safety against buckling (safety factor) of the structure.

  9. Lopato A.I., Poroshyna Y.E., Utkin P.S.
    Numerical study of the mechanisms of propagation of pulsating gaseous detonation in a non-uniform medium
    Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1263-1282

    In the last few years, significant progress has been observed in the field of rotating detonation engines for aircraft. Scientific laboratories around the world conduct both fundamental research related, for example, to the issues of effective mixing of fuel and oxidizer with separate supply, and applied development of existing prototypes. The paper provides a brief overview of the main results of the most significant recent computational work on the propagation of a one-dimensional pulsating gaseous detonation wave in a non-uniform medium. The general trends observed by the authors of these works are noted: the presence of parameter perturbations ahead of the wave front can lead to regularization and to resonant amplification of pulsations behind the detonation wave front. Thus, there is an opportunity, appealing from a practical point of view, to influence the stability of the detonation wave and to control it. The aim of the present work is to create a tool for studying the gas-dynamic mechanisms of these effects.

    The mathematical model is based on the one-dimensional Euler equations supplemented by a one-stage model of the kinetics of chemical reactions. The governing system of equations is written in the shock-attached frame, which requires adding a shock-change equation. A method for integrating this equation is proposed that takes into account the change in the density of the medium ahead of the wave front. Thus, a numerical algorithm for simulating detonation wave propagation in a non-uniform medium is proposed.

    Using the developed algorithm, a numerical study of the propagation of stable detonation in a medium with variable density was carried out. A mode with a relatively small oscillation amplitude is investigated, in which the fluctuations of the parameters behind the detonation wave front occur with the frequency of the fluctuations in the density of the medium. The oscillation period is shown to be related to the transit time of the C+ and C0 characteristics across the region that can conditionally be considered an induction zone. The phase shift between the oscillations of the detonation wave velocity and the gas density ahead of the wave is estimated as the maximum transit time of the C+ characteristic through the induction zone.

  10. Puchinin S.M., Korolkov E.R., Stonyakin F.S., Alkousa M.S., Vyguzov A.A.
    Subgradient methods with B.T. Polyak-type step for quasiconvex minimization problems with inequality constraints and analogs of the sharp minimum
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 105-122

    In this paper, we consider two variants of the concept of a sharp minimum for mathematical programming problems with a quasiconvex objective function and inequality constraints. We investigate a variant of a simple subgradient method with switching between productive and non-productive steps for which, on a class of problems with Lipschitz functions, convergence with the rate of a geometric progression to the set of exact solutions or its vicinity can be guaranteed. It is important that implementing the proposed method does not require knowing the sharp minimum parameter, which is usually difficult to estimate in practice. To overcome this problem, the authors propose to use a step adjustment procedure similar to the one previously proposed by B. T. Polyak. However, in this case, in comparison with the class of problems without constraints, the problem of knowing the exact minimal value of the objective function arises. The paper describes conditions on the inexactness of this information that make it possible to preserve convergence with the rate of a geometric progression in the vicinity of the set of minimum points of the problem.

    Two analogs of the concept of a sharp minimum for problems with inequality constraints are considered. In the first one, the exact solution is approximated only up to a pre-selected accuracy level; here we consider the case when the minimal value of the objective function is unknown and only some approximation of this value is given. We describe conditions on the inexact minimal value of the objective function under which convergence to the vicinity of the desired set of points with the rate of a geometric progression is still preserved. The second considered variant of the sharp minimum does not depend on the desired accuracy of the problem. For this, we propose a slightly different way of checking whether a step is productive, which allows us to guarantee convergence of the method to the exact solution with the rate of a geometric progression in the case of exact information.

    Convergence estimates are proved under conditions of weak convexity of the constraints and some restrictions on the choice of the initial point, and a corollary is formulated for the convex case, when the need for an additional assumption on the choice of the initial point disappears. For both approaches, it has been proven that the distance from the current point to the set of solutions decreases with the number of iterations. This, in particular, makes it possible to restrict the requirements on the properties of the functions used (Lipschitz continuity, sharp minimum) to a bounded set only. Some computational experiments are performed, including for the truss topology design problem; a toy version of the switching scheme is sketched below.
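    A hedged sketch of the switching idea on a toy convex problem (ours; not the authors' exact method or test problems): non-productive steps restore feasibility using the constraint subgradient, while productive steps use the Polyak-type step size $(f(x_k) - f_*)/\|\nabla f(x_k)\|^2$ with the minimal value $f_*$ assumed known here.

```python
import numpy as np

# Toy convex problem: minimize f(x) = max(|x1|, |x2|)
# subject to g(x) = 1 - x1 - x2 <= 0; solution (0.5, 0.5), f* = 0.5.
f_star = 0.5

def f(x):
    return np.max(np.abs(x))

def df(x):                        # a subgradient of f
    i = int(np.argmax(np.abs(x)))
    s = np.zeros(2)
    s[i] = np.sign(x[i]) if x[i] != 0 else 1.0
    return s

def g(x):
    return 1.0 - x[0] - x[1]

dg = np.array([-1.0, -1.0])       # subgradient of g (constant here)

x = np.array([3.0, -1.0])
tol = 1e-8
for k in range(200):
    if g(x) > tol:                # non-productive step: restore feasibility
        x = x - (g(x) / np.dot(dg, dg)) * dg
    else:                         # productive step with the Polyak step size
        s = df(x)
        x = x - ((f(x) - f_star) / np.dot(s, s)) * s

print("x =", x, " f(x) =", f(x))  # converges geometrically to (0.5, 0.5)
```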
