Search results for 'point-to-point estimates':
Articles found: 32
  1. Stonyakin F.S., Stepanov A.N., Gasnikov A.V., Titov A.A.
    Mirror descent for constrained optimization problems with large subgradient values of functional constraints
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 301-317

    The paper is devoted to minimization of a non-smooth functional $f$ under a non-positive non-smooth Lipschitz-continuous functional constraint $g$; the problem statement is considered separately for quasi-convex and strongly (quasi-)convex functionals. We propose new step-size strategies and adaptive stopping rules for Mirror Descent for the considered class of problems. It is shown that the methods are applicable to objective functionals of various levels of smoothness. By applying a special restart technique to the considered version of Mirror Descent, we obtain an optimal method for optimization problems with strongly convex objective functionals. Estimates of the convergence rate of the considered methods are obtained depending on the level of smoothness of the objective functional; these estimates indicate that the methods are optimal from the point of view of the theory of lower oracle bounds. In particular, the optimality of our approach for Hölder-continuous quasi-convex (sub)differentiable objective functionals is proved. In addition, the case where both the objective functional and the functional constraint are quasi-convex is considered. The paper presents numerical experiments demonstrating the advantages of the considered methods.
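    A minimal sketch of the switching subgradient scheme that such constrained Mirror Descent methods build on, assuming the Euclidean prox setup (so the mirror step reduces to a plain subgradient step); the step-size rule and all names below are illustrative, not the authors' exact rules:

    import numpy as np

    def mirror_descent_constrained(f_grad, g, g_grad, x0, eps, max_iter=5000):
        """Step on f while g(x) <= eps holds ("productive" step), otherwise step on g."""
        x = np.asarray(x0, dtype=float)
        productive = []
        for _ in range(max_iter):
            if g(x) <= eps:                         # productive iteration
                h = f_grad(x)
                productive.append(x.copy())
            else:                                   # non-productive iteration
                h = g_grad(x)
            step = eps / max(np.dot(h, h), 1e-12)   # adaptive step-size
            x = x - step * h                        # Euclidean prox = subgradient step
        if not productive:
            raise RuntimeError("no productive steps; increase max_iter or eps")
        return np.mean(productive, axis=0)          # average over productive points

    # Example: minimize |x - 2| subject to g(x) = x - 1 <= 0 (answer: x = 1)
    print(mirror_descent_constrained(
        lambda x: np.sign(x - 2.0),
        lambda x: float(x[0] - 1.0),
        lambda x: np.array([1.0]),
        [5.0], eps=1e-2))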

  2. Meleshko E.V., Afanasenko T.S., Gadzhimirzayev Sh.M., Pashkov R.A., Gilya-Zetinov A.A., Tsybulko E.A., Zaitseva A.S., Khelvas A.V.
    Discrete simulation of the road restoration process
    Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1255-1268

    This work describes the results of modeling the process of maintaining the readiness of a road network section under strikes with specified parameters. A one-dimensional road section up to 40 km long, with up to 100 strikes in total during the brigade's work, is considered. A simulation model has been developed for carrying out the work of keeping the section operational by several groups (engineering teams) that are part of an engineering and road division. A multicopter-type unmanned aerial vehicle is used to locate the points where obstacles appear. Life cycle schemes of the main participants of the tactical scene have been developed, and an event-driven model of the tactical scene has been built. A format is proposed for the event log generated as a result of simulation modeling of the process of maintaining a road section. To visualize this process, it is proposed to use visualization in the cyclogram format.

    An XSL style has been developed for building a cyclogram from an event log. As the algorithm for assigning barriers to brigades, the simplest rule has been adopted: choose the nearest barrier. A criterion describing the effectiveness of maintenance work on the site, based on the assessment of the average speed of vehicles on the road section, is proposed. Graphs of the criterion value and of the root-mean-square error as functions of the length of the maintained section are plotted, and an estimate is obtained for the maximum length of a road section that can be kept in a state of readiness with the specified values of the selected quality indicator, strike characteristics, and repair crew performance. The expediency of carrying out the maintenance work by several autonomously operating brigades that are part of the engineering and road division is shown.
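    A minimal sketch of the nearest-barrier dispatch rule mentioned above, for a one-dimensional road section; the function and variable names are illustrative:

    def assign_nearest(teams, barriers):
        """Greedy dispatcher sketch: each idle team takes the nearest open barrier."""
        assignments = {}
        open_barriers = set(range(len(barriers)))
        for t, t_pos in enumerate(teams):
            if not open_barriers:
                break
            nearest = min(open_barriers, key=lambda b: abs(barriers[b] - t_pos))
            assignments[t] = nearest
            open_barriers.remove(nearest)
        return assignments

    # One-dimensional road section, positions in km
    teams = [0.0, 12.5, 30.0]
    barriers = [3.2, 11.0, 18.7, 29.5, 35.1]
    print(assign_nearest(teams, barriers))  # -> {0: 0, 1: 1, 2: 3}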

    The influence of the speed of the unmanned aerial vehicle on the ability to maintain the readiness of the road section is analyzed. A speed range from 10 to 70 km/h is considered, which corresponds to the technical capabilities of multicopter-type reconnaissance unmanned aerial vehicles. The simulation results can be used as part of a complex simulation model of an army offensive or defensive operation, and for solving the problem of optimally assigning road-section readiness tasks to engineering and road brigades. The proposed approach may also be of interest for the development of military-oriented strategy games.

  3. Chubatov A.A., Karmazin V.N.
    The stable estimation of intensity of atmospheric pollution source on the base of sequential function specification method
    Computer Research and Modeling, 2009, v. 1, no. 4, pp. 391-403

    The approach presented in this work helps to organize operational control over the intensity of pollutant emissions into the atmosphere. It allows sequential estimation of the unknown intensity of an atmospheric pollution source from concentration measurements of the impurity at several stationary control points. The inverse problem is solved by means of step-by-step regularization and the sequential function specification method. The solution is presented in the form of a digital filter in the sense of Hamming. An algorithm for fitting the regularization parameter $r$ of the function specification method is described.
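    A minimal sketch of the sequential function specification idea (after Beck), assuming a linear convolution model between the source intensity and the concentration measured at a single control point; the array layout and names below are illustrative:

    import numpy as np

    def sfsm_estimate(y, phi, r):
        """y   : measured concentrations y[0..M-1] at one control point
           phi : pulse response of the transport model (phi[i] is the
                 concentration i steps after a unit emission)
           r   : number of future steps over which the unknown intensity is
                 temporarily held constant -- the regularization parameter"""
        M = len(y)
        q = np.zeros(M)                    # estimated source intensity
        X = np.cumsum(phi)                 # sensitivity of y[m+j] to a constant source
        for m in range(M - r + 1):
            num = den = 0.0
            for j in range(r):
                # response at time m+j of the already-estimated history q[0..m-1]
                known = sum(q[k] * phi[m + j - k] for k in range(m))
                num += (y[m + j] - known) * X[j]
                den += X[j] ** 2
            q[m] = num / den if den > 0 else 0.0   # keep the current value, step forward
        return q

    Larger r smooths the estimate and stabilizes it against measurement noise, which is the filtering role the abstract assigns to the regularization parameter.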

  4. Kotliarova E.V., Gasnikov A.V., Gasnikova E.V., Yarmoshik D.V.
    Finding equilibrium in two-stage traffic assignment model
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 365-379

    The authors describe a two-stage traffic assignment model. It consists of two blocks: the first is a model for calculating the correspondence (demand) matrix, and the second is a traffic assignment model. The first model calculates the correspondence matrix, which characterizes the required volumes of movement from one area to another, from a matrix of transport costs (here, travel time). To solve this problem, the authors propose to use one of the most popular methods of calculating the correspondence matrix in urban studies, the entropy model. The second model describes exactly how the travel demand specified by the correspondence matrix is distributed along the possible paths. Knowing how the flows are distributed along the paths, one can calculate the cost matrix. Equilibrium in the two-stage model is a fixed point of the sequence of these two models. In practice, the problem of finding the fixed point can be solved by the fixed-point iteration method. Unfortunately, the convergence of this method and estimates of its convergence rate have not yet been studied thoroughly. In addition, the numerical implementation of the algorithm runs into many problems; in particular, if the starting point is chosen poorly, the algorithm may require extremely large numbers to be computed and exceed the available memory even on the most modern computers. Therefore, the article proposes a method for reducing the problem of finding the equilibrium to a convex non-smooth optimization problem, together with a numerical method for solving the obtained optimization problem. Numerical experiments were carried out for both methods. The authors used data for Vladivostok (for this city, information from various sources was processed and collected into a new dataset) and for two smaller cities in the USA. It was not possible to achieve convergence with the fixed-point iteration method, whereas the second method on the same dataset demonstrated a convergence rate of $k^{-1.67}$.
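    A minimal sketch of the fixed-point iteration between the two blocks, with the entropy model implemented via Sinkhorn-type balancing; cost_of_demand is a hypothetical placeholder for the traffic assignment block, and all parameter values are illustrative:

    import numpy as np

    def entropy_demand(T, l, w, gamma=0.1, iters=200):
        """Correspondence matrix from cost matrix T with departure volumes l
        and arrival volumes w, balanced by Sinkhorn scaling."""
        K = np.exp(-gamma * T)
        u = np.ones_like(l, dtype=float)
        for _ in range(iters):
            v = w / (K.T @ u)
            u = l / (K @ v)
        return u[:, None] * K * v[None, :]

    def two_stage_equilibrium(cost_of_demand, l, w, T0, n_iter=50, tol=1e-6):
        """Iterate demand block -> assignment block until the cost matrix stabilizes.
        The iteration may fail to converge, which is exactly the difficulty the
        paper addresses by the convex non-smooth reformulation."""
        T = T0
        for _ in range(n_iter):
            d = entropy_demand(T, l, w)    # block 1: entropy model
            T_new = cost_of_demand(d)      # block 2: traffic assignment (placeholder)
            if np.max(np.abs(T_new - T)) < tol:
                break
            T = T_new
        return d, T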

  5. Grachev V.A., Nayshtut Yu.S.
    Buckling prediction for shallow convex shells based on the analysis of nonlinear oscillations
    Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1189-1205

    Buckling problems of thin elastic shells have become relevant again because of the discrepancies between the standards in many countries on how to estimate loads causing buckling of shallow shells and the results of experiments on thin-walled aviation structures made of high-strength alloys. The main contradiction is as follows: the ultimate internal stresses at shell buckling (collapsing) turn out to be lower than the ones predicted by the adopted design theory used in the US and European standards. The current regulations are based on the static theory of shallow shells put forward in the 1930s: within the nonlinear theory of elasticity for thin-walled structures, there are stable solutions that significantly differ from the forms of equilibrium typical of small initial loads. The minimum load at which an alternative form of equilibrium exists (the lowest critical load) was used as the maximum permissible one. In the 1970s it was recognized that this approach is unacceptable for complex loadings. Such cases were not practically relevant in the past, while now they occur with thinner structures used under complex conditions. Therefore, the initial theory underlying bearing capacity assessments needs to be revised. Recent mathematical results that proved asymptotic proximity of the estimates based on two analyses (the three-dimensional dynamic theory of elasticity and the dynamic theory of shallow convex shells) can be used as a theoretical basis. This paper starts with the setting of the dynamic theory of shallow shells, which comes down to one resolving integro-differential equation (once a special Green function is constructed). It is shown that the obtained nonlinear equation allows for separation of variables and has numerous time-periodic solutions that satisfy the Duffing equation with "a soft spring". This equation has been thoroughly studied; its numerical analysis enables finding the amplitude and the oscillation period depending on the properties of the Green function. If the shell is excited by a trial time-harmonic load, the movement of the surface points can be measured at the maximum amplitude. The study proposes an experimental set-up where resonance oscillations are generated by a trial load normal to the surface. The experimental measurements of the shell movements, the amplitude, and the oscillation period make it possible to estimate the safety factor of the structure's bearing capacity by non-destructive methods under operating conditions.
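    A minimal sketch of the kind of amplitude-period computation involved, for the soft-spring Duffing equation $\ddot{x} + \omega_0^2 x - \beta x^3 = 0$; the coefficients and the simple symplectic integrator are illustrative:

    import numpy as np

    def duffing_period(a0, omega0=1.0, beta=0.2, dt=1e-3, n=200_000):
        """Integrate the free oscillation started at amplitude a0 and estimate
        the period from successive upward zero crossings."""
        x, v = a0, 0.0
        crossings = []
        for i in range(n):
            # soft spring: the cubic term weakens the restoring force
            v += (-(omega0 ** 2) * x + beta * x ** 3) * dt
            x_new = x + v * dt
            if x <= 0.0 < x_new:           # upward zero crossing
                crossings.append(i * dt)
            x = x_new
        return float(np.mean(np.diff(crossings)))

    # The period grows with amplitude for a "soft spring":
    print(duffing_period(0.1), duffing_period(1.0))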

  6. Golubev V.I., Khokhlov N.I.
    Estimation of anisotropy of seismic response from fractured geological objects
    Computer Research and Modeling, 2018, v. 10, no. 2, pp. 231-240

    The seismic survey is a common method of prospecting for and exploration of oil and natural gas deposits. Invented at the beginning of the 20th century, it has been developed significantly and is currently used by almost all oilfield service companies. Its main advantages are the acceptable cost of fieldwork (in comparison with drilling wells) and the accuracy of estimating the characteristics of the subsurface area. However, with the discovery of non-traditional deposits (for example, the Arctic shelf, the Bazhenov Formation), the task of improving existing seismic data processing technologies and creating new ones has become important. Significant progress in this direction is possible with the use of numerical simulation of the propagation of seismic waves in realistic models of the geological medium, since it is possible to specify an arbitrary internal structure of the medium and subsequently evaluate the synthetic signal-response.

    The present work is devoted to the study of spatial dynamic processes occurring in a geological medium containing fractured inclusions during seismic exploration. The authors constructed a three-dimensional model of a layered massif containing a layer of fluid-saturated cracks, which makes it possible to estimate the signal-response as the structure of the inhomogeneous inclusion is varied. To describe the physical processes, we use a second-order system of partial differential equations for a linearly elastic body, which is solved numerically by a grid-characteristic method on a hexahedral grid. In this case, the crack planes are identified at the stage of grid construction, and an additional correction is then applied to ensure a correct seismic response for model parameters typical of geological media.

    In the paper, three-component areal seismograms with a common explosion point were obtained. On their basis, the effect of the structure of a fractured medium on the anisotropy of the seismic response recorded at the ground surface at different distances from the source was estimated. It is established that the kinematic characteristics of the signal remain constant, while the dynamic characteristics for ordered and disordered models can differ by tens of percent.

  7. Aleshin I.M., Malygin I.V.
    Machine learning interpretation of inter-well radiowave survey data
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 675-684

    Traditional geological prospecting methods are becoming ineffective: the exploration depth of kimberlite bodies and ore deposits has increased significantly. The only direct exploration method is to drill a system of wells to depths that provide access to the enclosing rocks. Due to the high cost of drilling, the role of inter-well survey methods has grown: they allow increasing the mean well spacing without significantly increasing the probability of missing a kimberlite or ore body. The inter-well radio wave survey method is effective for finding objects of high conductivity contrast. The physics of the method is based on the dependence of electromagnetic wave propagation on the conductivity of the propagation medium. The source and the receiver of electromagnetic radiation are electric dipoles placed in adjacent wells. Since the distance between the source and the receiver is known, the absorption coefficient of the medium can be estimated from the rate at which the radio wave amplitude decreases; rocks of low electrical resistance correspond to high absorption of radio waves. The inter-well measurement data thus allow estimating an effective electrical resistance (or conductivity) of the rock. Typically, the source and the receiver are immersed in adjacent wells synchronously. The electric field amplitude measured at the receiver site allows estimating the average value of the attenuation coefficient along the line connecting the source and the receiver. The measurements are taken during stops, approximately every 5 m; the distance between stops is thus much less than the distance between adjacent wells, which leads to significant spatial anisotropy in the distribution of the measured data. The drill grid covers a large area, and our goal is to build a three-dimensional model of the distribution of the electrical properties of the inter-well space throughout the whole area. The anisotropy of the spatial distribution makes it hard to use the standard geostatistical approach. To build a three-dimensional model of the attenuation coefficient, we used one of the methods of machine learning, the method of nearest neighbors, in which the value of the absorption coefficient at a given point is calculated from the $k$ nearest measurements; the number $k$ has to be determined from additional considerations. The effect of the spatial distribution anisotropy can be reduced by changing the spatial scale in the horizontal direction; the scale factor $\lambda$ is yet another external parameter of the problem. To select the values of the parameters $k$ and $\lambda$, we used the coefficient of determination. To demonstrate the construction of a three-dimensional image of the absorption coefficient, we apply the procedure to inter-well radio wave survey data obtained at one of the sites in Yakutia.
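    A minimal, self-contained sketch of the described scheme on synthetic stand-in data: $k$-nearest-neighbors regression with the horizontal coordinates rescaled by $\lambda$, and $k$, $\lambda$ chosen by the coefficient of determination; the data generation and candidate grids are illustrative:

    import numpy as np
    from sklearn.metrics import r2_score
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    # Stand-in for inter-well data: dense along depth z (stops every ~5 m),
    # sparse along x, y (well spacing), attenuation varying smoothly in space.
    pts = np.column_stack([rng.uniform(0, 500, 2000),   # x
                           rng.uniform(0, 500, 2000),   # y
                           rng.uniform(0, 100, 2000)])  # z
    vals = 0.01 * pts[:, 2] + 0.001 * pts[:, 0] + rng.normal(0, 0.05, 2000)
    train, test = np.arange(1500), np.arange(1500, 2000)

    def knn_r2(k, lam):
        """Fit k-NN on lam-rescaled horizontal coordinates, score by R^2."""
        X = pts.copy()
        X[:, :2] *= lam                    # shrink x, y to offset the anisotropy
        model = KNeighborsRegressor(n_neighbors=k).fit(X[train], vals[train])
        return r2_score(vals[test], model.predict(X[test]))

    best_k, best_lam = max(((k, lam) for k in (3, 5, 10, 20)
                            for lam in (0.05, 0.1, 0.5, 1.0)),
                           key=lambda p: knn_r2(*p))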

  8. In this work we have developed a new efficient program for the numerical simulation of 3D global chemical transport on an adaptive finite-difference grid, which allows us to concentrate grid points in the regions where flow variables change sharply and to coarsen the grid in the regions of their smooth behavior, significantly reducing the grid size. We represent the adaptive grid with a combination of dynamic (tree, linked list) and static (array) data structures. The dynamic data structures are used for grid reconstruction, while the calculations of the flow variables are based on the static data structures. The introduction of the static data structures speeds up the program by a factor of 2 in comparison with the conventional approach, in which the grid is represented with dynamic data structures only.

    We wrote and tested our program on a computer with 6 CPU cores. Using the computer microarchitecture simulator gem5, we estimated the scalability of the program on a significantly greater number of cores (up to 32), using several models of a computer system with the design “computational cores – cache – main memory”. It has been shown that the microarchitecture of a computer system has a significant impact on scalability: the same program demonstrates different efficiency on different computer microarchitectures. For example, we obtain a speedup of 14.2 on a processor with 32 cores and 2 cache levels, but a speedup of 22.2 on a processor with 32 cores and 3 cache levels. The execution time of a program on a computer model in gem5 is $10^4$–$10^5$ times greater than the execution time of the same program on a real computer, and equals 1.5 hours for the most complex model.

    Also in this work we describe how to configure gem5 and how to perform simulations with it most efficiently.
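    A minimal one-dimensional sketch of the two-representation idea described above: refinement happens on a dynamic tree, while the compute kernel sees only flat arrays; the names and the 1D setting are illustrative:

    from dataclasses import dataclass, field

    @dataclass
    class Cell:
        x: float                           # left coordinate
        dx: float                          # cell size
        children: list = field(default_factory=list)

        def refine(self):
            h = self.dx / 2
            self.children = [Cell(self.x, h), Cell(self.x + h, h)]

    def flatten(cell, xs, dxs):
        """Collect leaf cells into flat arrays for the compute kernel."""
        if cell.children:
            for c in cell.children:
                flatten(c, xs, dxs)
        else:
            xs.append(cell.x)
            dxs.append(cell.dx)

    root = Cell(0.0, 1.0)
    root.refine()
    root.children[1].refine()              # concentrate points where variables change sharply
    xs, dxs = [], []
    flatten(root, xs, dxs)                 # the kernel iterates over xs/dxs (cache-friendly);
                                           # the tree is touched only when the grid is rebuilt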

  9. Abakumov A.I., Izrailsky Y.G.
    Models of phytoplankton distribution over chlorophyll in various habitat conditions. Estimation of aquatic ecosystem bioproductivity
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1177-1190

    A model of phytoplankton abundance dynamics is proposed, depending on changes in the chlorophyll content of phytoplankton under the influence of changing environmental conditions. The model takes into account the dependence of biomass growth on environmental conditions, as well as on photosynthetic chlorophyll activity. The light and dark stages of photosynthesis are distinguished. The processes of chlorophyll consumption during photosynthesis in the light, and of the growth of chlorophyll mass together with phytoplankton biomass, are described. The model takes into account environmental conditions such as mineral nutrients, illumination and water temperature. The model is spatially distributed; the spatial variable corresponds to the mass fraction of chlorophyll in phytoplankton. Thereby the possible spread of chlorophyll content in phytoplankton is taken into consideration. The model calculates the density distribution of phytoplankton over the fraction of chlorophyll in it. In addition, the rate of production of new phytoplankton biomass is calculated. In parallel, point analogs of the distributed model are considered. The diurnal and seasonal (within a year) dynamics of the phytoplankton distribution over the chlorophyll fraction are demonstrated. The characteristics of the rate of primary production under daily or seasonally changing environmental conditions are indicated. Model characteristics of the dynamics of phytoplankton biomass growth show that in the light this growth is about twice as large as in the dark; this shows that illumination significantly affects the rate of production. The seasonal dynamics demonstrate accelerated growth of biomass in spring and autumn. The spring maximum is associated with warming under the conditions of biogenic substances accumulated in winter, and the autumn, slightly smaller maximum, with the accumulation of nutrients during the summer decline of phytoplankton biomass; the biomass decreases in summer, again due to a deficiency of nutrients. Thus, in the presence of light, mineral nutrition plays the main role in phytoplankton dynamics.

    In general, the model demonstrates dynamics of phytoplankton biomass that are qualitatively similar to classical concepts under daily and seasonal changes in the environment. The model seems suitable for assessing the bioproductivity of aquatic ecosystems, and it can be supplemented with equations and terms for a more detailed description of the complex processes of photosynthesis. Introducing variables of the physical habitat space and coupling the model with satellite information on the water surface leads to model estimates of the bioproductivity of vast marine areas.
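    A deliberately simplified point-analog sketch of the balances the abstract describes; the functional forms and all coefficients below are assumptions for illustration only, not the authors' equations:

    import numpy as np

    def step(B, C, N, I, dt, mu=5.0, kN=0.5, kC=0.2, m=0.05):
        """One Euler step for biomass B, chlorophyll mass C and nutrients N
        under illumination I (assumed forms, not the authors' equations)."""
        theta = C / max(B, 1e-12)               # chlorophyll fraction of biomass
        growth = mu * I * theta * N / (kN + N)  # light-stage photosynthesis rate
        dB = (growth - m) * B                   # growth minus mortality
        dC = 0.3 * growth * B - kC * I * C      # chlorophyll synthesis vs. light-stage consumption
        dN = -growth * B                        # nutrient uptake
        return B + dB * dt, C + dC * dt, max(N + dN * dt, 0.0)

    B, C, N, dt = 1.0, 0.02, 5.0, 0.01
    for t in np.arange(0.0, 10.0, dt):
        I = 1.0 if (t % 1.0) < 0.5 else 0.0     # diurnal driver: light half of each day
        B, C, N = step(B, C, N, I, dt)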

  10. Dvinskikh D.M., Pirau V.V., Gasnikov A.V.
    On the relations of stochastic convex optimization problems with empirical risk minimization problems on $p$-norm balls
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319

    In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e.g., risk minimization) and mathematical statistics (e.g., maximum likelihood estimation). There are two main approaches to solving such problems, namely the Stochastic Approximation approach (online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline approach). In the offline approach, the problem is replaced by its empirical counterpart (the empirical risk minimization problem). The natural question is how to define the problem sample size, i.e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is also a solution of the original problem with the desired precision. This issue is one of the main issues in modern machine learning and optimization. In the last decade, many significant advances were made in these areas for convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of arbitrary balls in $p$-norms. We also explore how the parameter $p$ affects the estimates of the required number of terms in the empirical risk (the sample size).

    In this paper, both convex and saddle point optimization problems are considered. For strongly convex problems, the existing results on equal sample sizes in both approaches (online and offline) were generalized to arbitrary norms. Moreover, it was shown that the strong convexity condition can be weakened: the obtained results are valid for functions satisfying the quadratic growth condition. In the case when this condition is not met, it is proposed to regularize the original problem in an arbitrary norm. In contrast to convex problems, saddle point problems are much less studied. For saddle point problems, the sample size was obtained under the condition of $\gamma$-growth of the objective function; when $\gamma = 1$, this condition is the sharp minimum condition of convex problems. In this article, it was shown that the sample size in the case of a sharp minimum is almost independent of the desired accuracy of the solution of the original problem.
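    An illustrative least-squares toy contrasting the two approaches on a $p$-norm ball; the problem, the step sizes and the crude radial rescaling onto the ball are assumptions for illustration, not from the paper:

    import numpy as np

    rng = np.random.default_rng(1)
    d, N, p, R = 5, 2000, 1.5, 1.0
    A = rng.normal(size=(N, d))
    b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=N)

    def rescale_to_ball(x, p, R):
        """Radial rescaling into the p-norm ball (a stand-in for the exact
        l_p projection, which is more involved)."""
        n = np.linalg.norm(x, p)
        return x if n <= R else x * (R / n)

    # Online (Stochastic Approximation): one sampled realization per step
    x_sa = np.zeros(d)
    for i in range(N):
        g = (A[i] @ x_sa - b[i]) * A[i]         # stochastic gradient of the loss
        x_sa = rescale_to_ball(x_sa - g / (i + 1), p, R)

    # Offline (Sample Average Approximation): minimize the empirical risk over all N terms
    x_saa = np.zeros(d)
    for _ in range(500):
        g = A.T @ (A @ x_saa - b) / N           # full empirical gradient
        x_saa = rescale_to_ball(x_saa - 0.1 * g, p, R)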
