Search results for 'complexity estimate':
Articles found: 55
  1. Volokhova A.V., Zemlyanay E.V., Lakhno V.D., Amirkhanov I.V., Puzynin I.V., Puzynina T.P.
    Numerical investigation of photoexcited polaron states in water
    Computer Research and Modeling, 2014, v. 6, no. 2, pp. 253-261

    A method and a complex of computer programs are developed for the numerical simulation of the excitation of polaron states in condensed media. A numerical study of the formation of polaron states in water under ultraviolet laser irradiation is carried out. Our approach makes it possible to reproduce the experimental data on the formation of hydrated electrons. A numerical scheme is presented for solving the corresponding system of nonlinear partial differential equations. The parallel implementation is based on the MPI technique. The numerical results are compared with experimental data and theoretical estimates.

    Citations: 1 (RSCI).
  2. Khavinson M.J., Kulakov M.P., Frisman Y.Y.
    Mathematical modeling of the age groups of employed peoples by the example of the southern regions of the Russian Far East
    Computer Research and Modeling, 2016, v. 8, no. 5, pp. 787-801

    The article focuses on a nonlinear mathematical model that describes the interaction of different age groups of the employed population. The interactions are treated by analogy with population relationships (competition, discrimination, assistance, oppression, etc.). By interaction of people we mean the generalized social and economic mechanisms that cause related changes in the numbers of employees of different age groups. Three age groups of the employed population are considered: young specialists (15–29 years), experienced workers (30–49 years), and employees of pre-retirement and retirement age (50 and older). The model parameters are estimated from statistical data for the southern regions of the Far Eastern Federal District (FEFD). Analysis of model scenarios leads to the conclusion that the observed fluctuations in the numbers of employees of different ages, against the background of a stable total employed population, may be a consequence of complex interactions between these groups. Computational experiments with the obtained parameter values allowed us to calculate the rate of decline and aging of the working population and to determine the nature of the interactions between the age groups of employees, which are not directly reflected in the statistics. It was found that in the FEFD the employed aged 50 and older are discriminated against by young workers under 29, while the groups aged up to 29 and 30–49 are in a partnership. It is shown that in the most developed regions (Primorsky and Khabarovsk Krai) there is "uniform" competition among the age groups of the employed population. For Primorsky Krai we were able to identify a mixing of dynamics, a situation typical of systems undergoing structural adjustment: long cycles of the employed population form when the migration inflow of employees aged 30–49 decreases significantly.
Besides, the change in migration is accompanied by a change in the type of interaction: from discrimination of the middle generation by the oldest one to discrimination of the older generation by the middle one. In the less developed regions (Amur, Magadan and the Jewish Autonomous Regions) the migration balance of almost all age groups is lower, workers under 29 discriminate against the other age groups, and the 30–49 group discriminates against the older generation.
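The group interactions described above can be sketched as a Lotka-Volterra-type system; the sketch below is illustrative only, with hypothetical coefficients and group sizes rather than the values fitted in the article.

```python
# Sketch of a Lotka-Volterra-type interaction model for three employed
# age groups, in the spirit of the article; all coefficients here are
# illustrative, not the fitted values from the paper.

def step(x, a, B, dt=0.1):
    """One Euler step of dx_i/dt = x_i * (a_i + sum_j B[i][j] * x_j)."""
    return [
        xi + dt * xi * (ai + sum(bij * xj for bij, xj in zip(row, x)))
        for xi, ai, row in zip(x, a, B)
    ]

# groups: young (15-29), experienced (30-49), pre-retirement/retirement (50+)
x = [1.0, 1.5, 0.8]          # group sizes (arbitrary units)
a = [0.05, 0.02, -0.01]      # intrinsic growth/decline rates
B = [[-0.02, 0.01, 0.0],     # negative diagonal: intra-group competition
     [0.01, -0.02, 0.01],    # positive off-diagonal: assistance
     [-0.01, 0.0, -0.02]]    # negative off-diagonal: discrimination

for _ in range(100):
    x = step(x, a, B)
```

The sign pattern of the interaction matrix B encodes the relationship type (competition, assistance, discrimination), which is what the parameter identification in the article recovers from the statistics.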

    Views (last year): 4. Citations: 3 (RSCI).
  3. Bagaev R.A., Golubev V.I., Golubeva Y.A.
    Full-wave 3D earthquake simulation using the double-couple model and the grid-characteristic method
    Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1061-1067

    One of the destructive natural processes is the initiation of regional seismic activity, which leads to a large number of human deaths. Much effort has been made to develop precise and robust methods for estimating the seismic stability of buildings. One of the most common approaches is the natural frequency method; its obvious drawback is low precision due to the oversimplification of the model. Another method is the detailed simulation of dynamic processes using the finite element method. Unfortunately, the quality of such simulations is insufficient because of the difficulty of setting a correct free boundary condition. That is why the development of new numerical methods for seismic stability problems is a high priority nowadays.

    The present work is devoted to the study of the spatial dynamic processes occurring in a geological medium during an earthquake. We describe a method for simulating seismic wave propagation from the hypocenter to the day surface. To describe the physical processes, we use a second-order system of partial differential equations for a linearly elastic body, which is solved numerically by a grid-characteristic method on parallelepiped meshes. The widely used geological hypocenter model, called the "double-couple" model, is incorporated into this numerical algorithm. In this case any heterogeneities, such as geological layers with curvilinear boundaries, gas- and fluid-filled cracks, fault planes, etc., may be taken into account explicitly.

    In this paper, the seismic waves emitted during the earthquake initiation process are numerically simulated. Two different models are used: a homogeneous half-space and a multilayered geological massif with a day surface. All of their parameters are set based on previously published scientific articles. Adequate agreement between the simulation results is obtained, and the discrepancies may be explained by differences in the numerical methods used. The numerical approach described can be extended to more complex physical models of geological media.

  4. Pletnev N.V.
    Fast adaptive by constants of strong-convexity and Lipschitz for gradient first order methods
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 947-963

    The work is devoted to the construction of first-order convex optimization methods that are efficient and applicable to real tasks, that is, methods using only the values of the target function and its derivatives. The construction uses OGM-G, a fast gradient method that is optimal in complexity but requires knowledge of the Lipschitz constant of the gradient and of the strong convexity constant to determine the number of steps and the step length, which makes practical usage very hard. An algorithm ACGM that is adaptive in the strong convexity constant is proposed, based on restarts of OGM-G with updates of the strong convexity constant estimate, as well as an algorithm ALGM that is adaptive in the Lipschitz constant of the gradient, in which the restarts of OGM-G are supplemented by a selection of the Lipschitz constant with verification of the smoothness conditions used in the universal gradient descent method. This eliminates the disadvantages of the original method associated with the need to know these constants and makes practical usage possible. The optimality of the complexity estimates for the constructed algorithms is proved. To verify the results, experiments on model functions and real machine learning tasks are carried out.
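The Lipschitz-constant selection mentioned above can be illustrated by the generic backtracking scheme used in universal gradient methods; this is a minimal sketch of the adaptivity idea on a scalar quadratic, not the article's ACGM/ALGM algorithms.

```python
# Minimal sketch of adaptivity in the Lipschitz constant L: gradient
# descent with backtracking on the smoothness condition, in the spirit
# of the check used by universal gradient methods (generic scheme,
# not the article's algorithm).

def adaptive_gd(f, grad, x, L=1.0, iters=100):
    for _ in range(iters):
        g = grad(x)
        while True:
            y = x - g / L
            # smoothness condition: f(y) <= f(x) - ||g||^2 / (2L)
            if f(y) <= f(x) - (g * g) / (2 * L) + 1e-12:
                break
            L *= 2.0          # estimate too small: increase L
        x = y
        L /= 2.0              # try a more optimistic step next time
    return x, L

# quadratic test: f(x) = 2*x^2, true Lipschitz constant of grad is 4
x_star, L_est = adaptive_gd(lambda x: 2 * x * x, lambda x: 4 * x, x=5.0)
```

Doubling on failure and halving after each accepted step keeps the local estimate of L close to the true smoothness of the function without requiring it in advance.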

  5. Grachev V.A., Nayshtut Yu.S.
    Buckling prediction for shallow convex shells based on the analysis of nonlinear oscillations
    Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1189-1205

    Buckling problems of thin elastic shells have become relevant again because of the discrepancies between the standards of many countries on how to estimate the loads causing buckling of shallow shells and the results of experiments on thin-walled aviation structures made of high-strength alloys. The main contradiction is as follows: the ultimate internal stresses at shell buckling (collapse) turn out to be lower than the ones predicted by the design theory adopted in the US and European standards. The current regulations are based on the static theory of shallow shells put forward in the 1930s: within the nonlinear theory of elasticity for thin-walled structures there are stable solutions that significantly differ from the forms of equilibrium typical of small initial loads. The minimum load at which an alternative form of equilibrium exists (the lowest critical load) was used as the maximum permissible one. In the 1970s it was recognized that this approach is unacceptable for complex loadings. Such cases were not practically relevant in the past, while now they occur with the thinner structures used under complex conditions. Therefore, the initial theory of bearing capacity assessment needs to be revised. The recent mathematical results proving the asymptotic proximity of the estimates based on two analyses (the three-dimensional dynamic theory of elasticity and the dynamic theory of shallow convex shells) can serve as the theoretical basis. This paper starts with the formulation of the dynamic theory of shallow shells, which comes down to a single resolving integro-differential equation (once a special Green function is constructed). It is shown that the obtained nonlinear equation allows separation of variables and has numerous time-periodic solutions that satisfy the Duffing equation with "a soft spring".
This equation has been thoroughly studied; its numerical analysis enables finding the amplitude and the oscillation period depending on the properties of the Green function. If the shell is excited by a trial time-harmonic load, the movement of the surface points can be measured at the maximum amplitude. The study proposes an experimental set-up in which resonance oscillations are generated by a trial load normal to the surface. The experimental measurements of the shell movements, the amplitude and the oscillation period make it possible to estimate the safety factor of the bearing capacity of the structure by non-destructive methods under operating conditions.
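The amplitude-period dependence of a soft-spring Duffing oscillator can be observed numerically; a minimal sketch with illustrative parameters (not taken from the article):

```python
# Sketch: free oscillation of a Duffing oscillator with a "soft spring",
#   x'' + w0^2 * x - beta * x^3 = 0,
# integrated with the semi-implicit (symplectic) Euler method.
# Parameters are illustrative, not the article's values.

def duffing_period(x0, w0=1.0, beta=0.1, dt=1e-3):
    """Estimate the oscillation period from two successive
    downward zero crossings of x(t)."""
    x, v, t = x0, 0.0, 0.0
    crossings = []
    while len(crossings) < 2 and t < 100.0:
        x_prev = x
        v += dt * (-w0 ** 2 * x + beta * x ** 3)
        x += dt * v
        t += dt
        if x_prev > 0.0 >= x:
            crossings.append(t)
    return crossings[1] - crossings[0]

# soft spring: the period grows with the amplitude
T_small = duffing_period(0.1)
T_large = duffing_period(1.0)
```

This growth of the period with amplitude is the property that lets the trial-load measurements of amplitude and period carry information about the bearing capacity.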

  6. Ougolnitsky G.A., Usov A.B.
    Game-theoretic model of coordinations of interests at innovative development of corporations
    Computer Research and Modeling, 2016, v. 8, no. 4, pp. 673-684

    Dynamic game-theoretic models of corporate innovative development are investigated. The proposed models are based on the coordination of the private and public interests of agents. It is supposed that the interest structure of each agent includes both private (personal) components and public ones (the interests of the whole company, connected first of all with its innovative development). The agents allocate their personal resources between these two directions. The system dynamics is described by a difference (not differential) equation. The proposed model of innovative development is studied by simulation and by enumeration of the domains of feasible controls with a constant step. The main contribution of the paper is a comparative analysis of the efficiency of the methods of hierarchical control (compulsion or impulsion) for the information structures of Stackelberg or Germeier (four structures) by means of system compatibility indices. The proposed model is universal and can be used for scientifically grounded support of the innovative development programs of any economic firm. The features of a specific company are taken into account in the process of model identification (the determination of the specific classes of model functions and the numerical values of their parameters), which forms a separate complex problem and requires an analysis of statistical data and expert estimations. The following assumptions about the information rules of the hierarchical game are accepted: all players use open-loop strategies; the leader chooses and reports to the followers some values of administrative (compulsion) or economic (impulsion) control variables, which can be functions of time only (Stackelberg games) or depend also on the followers' controls (Germeier games); given the leader's strategies, all followers simultaneously and independently choose their strategies, which gives a Nash equilibrium in the followers' game.
For a finite number of iterations the proposed simulation algorithm allows one to build an approximate solution of the model or to conclude that it does not exist. The reliability and efficiency of the proposed algorithm follow from the properties of the scenario method and of direct ordered enumeration with a constant step. Comprehensive conclusions about the comparative efficiency of the methods of hierarchical control of innovations are obtained.

    Views (last year): 9. Citations: 6 (RSCI).
  7. We build new tests that make it possible to increase the human capacity for information processing through the parallel execution of several logic operations of a prescribed type. To check the causes of the capacity increase, we develop control tests on the same class of logic operations in which a parallel organization of the calculations is ineffective. We use the apparatus of universal algebra and automata theory. This article extends a cycle of works investigating the human capacity for parallel calculations; the general publications on this theme are contained in the references. The tasks in the described tests can be defined as the calculation of the result of a sequence of operations of the same type from some algebra. If the operation is associative, then parallel calculation is effective thanks to a suitable grouping of the process; in the theory of computation this corresponds to the simultaneous work of several processors. Each processor transforms per unit time a certain known number of elements of the input data or of the intermediate results (the processor productivity). It is not known at present what kind of data elements the brain uses for logical or mathematical calculations, or how many elements are processed per unit time. Therefore the test contains a sequence of task presentations with different numbers of logical operations in a fixed alphabet; this number is the complexity measure of the task. Analysis of the dependence of the solution time on the complexity makes it possible to estimate the processor productivity and the form of the organization of the calculation. For sequential calculation only one processor works, and the solution time is a linear function of complexity.
If new processors begin to work in parallel as the complexity of the task increases, then the dependence of the solution time on complexity is represented by a downward-convex curve. To detect the situation when a person increases the speed of a single processor as the complexity grows, we use task series with similar operations but in a non-associative algebra. In such tasks parallel calculation gains little efficiency from increasing the number of processors; this is the control set of tests. In the article we also consider one more class of tests, based on calculating the trajectory of the states of a formal automaton for a given input sequence. We investigate a special class of automata (relays) for which the construction affects the effectiveness of the parallel calculation of the final automaton state. For all tests we estimate the effectiveness of parallel calculation. This article does not contain experimental results.
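The role of associativity described above can be sketched directly: a chain of identical associative operations may be regrouped as a balanced tree whose levels consist of independent (parallelizable) pairs, while a non-associative operation breaks this regrouping. A minimal illustration:

```python
# Sketch of why associativity enables parallel evaluation: a chain of
# identical associative operations can be regrouped as a balanced tree,
# so several processors could work on disjoint pairs simultaneously.

from functools import reduce

def tree_reduce(op, items):
    """Pairwise (tree-shaped) reduction; each level's pairs are
    independent and could run in parallel."""
    items = list(items)
    while len(items) > 1:
        items = [op(items[i], items[i + 1]) if i + 1 < len(items) else items[i]
                 for i in range(0, len(items), 2)]
    return items[0]

xs = list(range(1, 9))

# for an associative operation, both groupings agree
seq = reduce(lambda a, b: a + b, xs)
par = tree_reduce(lambda a, b: a + b, xs)

# for a non-associative operation (as in the control tests) they differ
seq_sub = reduce(lambda a, b: a - b, xs)
par_sub = tree_reduce(lambda a, b: a - b, xs)
```

For n operands the tree has depth about log2(n), which is why the solution time of a parallel computation grows slower than linearly in the complexity measure.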

    Views (last year): 14. Citations: 1 (RSCI).
  8. Gasnikov A.V., Kubentayeva M.B.
    Searching stochastic equilibria in transport networks by universal primal-dual gradient method
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 335-345

    We consider one of the problems of transport modelling: searching for the equilibrium distribution of traffic flows in a network. We use the classic Beckmann model to describe the time costs and flow distribution in a network represented by a directed graph. Meanwhile, agents' behavior is not completely rational, which is described by the introduction of Markov logit dynamics: each driver selects a route randomly according to the Gibbs distribution, taking into account the current time costs on the edges of the graph. Thus, the problem is reduced to searching for the stationary distribution of this dynamics, which is a stochastic Nash–Wardrop equilibrium in the corresponding population congestion game on the transport network. Since the game is potential, this problem is equivalent to minimizing some functional over the flow distribution; the stochasticity is reflected in the appearance of an entropy regularization, in contrast to the non-stochastic case. The dual problem is constructed to obtain a solution of the optimization problem. The universal primal-dual gradient method is applied. A major specificity of this method is its adaptive adjustment to the local smoothness of the problem, which is most important when the objective function has a complex structure and a prior smoothness bound cannot be obtained with acceptable accuracy. Such a situation occurs in the considered problem, since the properties of the function strongly depend on the transport graph, on which we do not impose strong restrictions. The article describes the algorithm, including the numerical differentiation used to calculate the objective function value and gradient. In addition, the paper presents a theoretical estimate of the time complexity of the algorithm and the results of numerical experiments conducted on the network of a small American town.
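The logit (Gibbs) route choice mentioned above can be sketched as follows; the travel times and the stochasticity parameter gamma are illustrative values, not data from the article.

```python
# Sketch of the Gibbs (logit) route choice used in the Markov dynamics:
# a driver picks route r with probability proportional to
# exp(-T_r / gamma), where T_r is the current route travel time and
# gamma > 0 controls the degree of irrationality.

import math

def logit_choice_probs(times, gamma):
    # subtract the minimum travel time for numerical stability
    t_min = min(times)
    w = [math.exp(-(t - t_min) / gamma) for t in times]
    s = sum(w)
    return [wi / s for wi in w]

times = [10.0, 12.0, 15.0]        # current travel times on three routes
p = logit_choice_probs(times, gamma=1.0)
```

As gamma tends to zero the choice concentrates on the shortest route (the deterministic Wardrop behavior); large gamma spreads drivers almost uniformly, which is the source of the entropy term in the potential.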

    Views (last year): 28.
  9. Blanter E.M., Elaeva M.S., Shnirman M.G.
    Synchronization of the asymmetrical system with three non-identical Kuramoto oscillators: models of solar meridional circulation
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 345-356

    The Kuramoto model of nonlinearly coupled oscillators provides a simple but effective approach to the study of the synchronization phenomenon in complex systems. In the present article we consider a particular Kuramoto model with three non-identical oscillators associated with a multi-cell radial profile of the solar meridional circulation. The top and the bottom oscillators are coupled through the middle one. The main difference of the present Kuramoto model from previous ones consists in the non-identical coupling: the coupling coefficients that tie the middle oscillator to the top and the bottom ones are different. We investigate how the coupling asymmetry of the middle oscillator influences the synchronization. In the present model the synchronization conditions differ from those of the classical Kuramoto model, allowing synchronization to be reached with weaker coupling. We perform a reconstruction of the coupling coefficients from the phase difference between the top and the bottom oscillators, assuming that synchronization is reached and the natural frequencies are known. The absolute cumulative coupling is uniquely determined by the phase difference between the top and the bottom oscillators and the coupling asymmetry of the middle oscillator. In the general case, higher values of the coupling asymmetry of the middle oscillator correspond to lower cumulative coupling. A unique coupling reconstruction with unknown coupling asymmetry is possible in the general case only for weak cumulative coupling. Deviations from the general case are discussed. We perform a model simulation with natural frequencies estimated from the velocities of the solar meridional flow. Helioseismological observations of the deep flow may be attributed either to the middle cell or to the deep one. We discuss the difference between these two cases in terms of the coupling reconstruction.
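The three-oscillator chain with the middle oscillator as the only link can be sketched numerically; the natural frequencies and coupling coefficients below are illustrative, not the solar values used in the article.

```python
# Sketch of a three-oscillator Kuramoto chain: the top (1) and bottom (3)
# oscillators are coupled only through the middle one (2), and the two
# coupling coefficients k12 and k32 may differ (the coupling asymmetry).
# All parameter values here are illustrative.

import math

def simulate(w, k12, k32, dt=0.01, steps=20000):
    """Explicit Euler integration of the three phases; returns the
    instantaneous frequencies at the end of the run."""
    p1, p2, p3 = 0.0, 0.0, 0.0
    for _ in range(steps):
        d1 = w[0] + k12 * math.sin(p2 - p1)
        d2 = w[1] + k12 * math.sin(p1 - p2) + k32 * math.sin(p3 - p2)
        d3 = w[2] + k32 * math.sin(p2 - p3)
        p1, p2, p3 = p1 + dt * d1, p2 + dt * d2, p3 + dt * d3
    return d1, d2, d3

# close natural frequencies and strong enough coupling: the effective
# frequencies lock to the mean natural frequency (synchronization)
f1, f2, f3 = simulate(w=(1.0, 1.1, 0.9), k12=1.0, k32=1.0)
```

In the synchronized regime the stationary phase differences carry the information from which the article reconstructs the coupling coefficients.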

  10. In this work we have developed a new efficient program for the numerical simulation of 3D global chemical transport on an adaptive finite-difference grid, which allows us to concentrate grid points in the regions where the flow variables change sharply and to coarsen the grid in the regions of their smooth behavior, significantly reducing the grid size. We represent the adaptive grid by a combination of several dynamic (tree, linked list) and static (array) data structures. The dynamic data structures are used for grid reconstruction, while the calculations of the flow variables are based on the static data structures. The introduction of the static data structures speeds up the program by a factor of 2 in comparison with the conventional approach of representing the grid with dynamic data structures only.
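The two-representation idea (a dynamic tree for refinement, a static array for the compute phase) can be sketched as follows; the classes and the quadtree-style 2D refinement are simplified illustrations, not the article's implementation.

```python
# Sketch: a dynamic tree is used only for grid refinement/coarsening,
# and its leaf cells are flattened into a plain array for the fast
# numerical sweeps. Simplified illustration, not the article's code.

class Node:
    def __init__(self, value):
        self.value = value      # flow variable stored in the cell
        self.children = []      # empty list marks a leaf cell

    def refine(self):
        """Split a leaf into four children (2D refinement)."""
        self.children = [Node(self.value) for _ in range(4)]

def flatten(root):
    """Collect leaf values into a static array for the compute phase."""
    if not root.children:
        return [root.value]
    out = []
    for c in root.children:
        out.extend(flatten(c))
    return out

root = Node(1.0)
root.refine()                   # adapt the grid where needed
root.children[0].refine()       # refine one cell further
cells = flatten(root)           # static array used in the solver loop
```

Iterating over a contiguous array instead of chasing tree pointers is cache-friendly, which is consistent with the factor-of-2 speedup the authors report for the static representation.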

    We wrote and tested our program on a computer with 6 CPU cores. Using the computer microarchitecture simulator gem5, we estimated the scalability of the program on a significantly greater number of cores (up to 32), using several models of a computer system with the design "computational cores – cache – main memory". It has been shown that the microarchitecture of a computer system has a significant impact on scalability, i.e. the same program demonstrates different efficiency on different computer microarchitectures. For example, we obtain a speedup of 14.2 on a processor with 32 cores and 2 cache levels, but a speedup of 22.2 on a processor with 32 cores and 3 cache levels. The execution time of a program on a computer model in gem5 is 10^4–10^5 times greater than the execution time of the same program on a real computer and equals 1.5 hours for the most complex model.

    In this work we also describe how to configure gem5 and how to perform simulations with it most efficiently.


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

