Search results for 'generalized solutions':
Articles found: 74
  1. Vrazhnov D.A., Shapovalov A.V., Nikolaev V.V.
    On quality of object tracking algorithms
    Computer Research and Modeling, 2012, v. 4, no. 2, pp. 303-313

Object movement in a video is classified as regular (movement along a continuous trajectory) or non-regular (the trajectory breaks due to occlusion by other objects, object jumps, and so on). In the case of regular movement the tracker is considered as a dynamical system, which makes it possible to use the conditions of existence, uniqueness, and stability of the dynamical system's solution. This condition serves as a correctness criterion for the tracking process. A quantitative criterion for assessing the correctness of mean-shift tracking, based on the Lipschitz condition, is also suggested. The results are generalized to an arbitrary tracker.
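As a rough illustration of the Lipschitz-based criterion (a sketch with a toy tracker map, not the authors' implementation): if the tracker iteration is a contraction, i.e. its Lipschitz constant is below one, the iterates converge to a unique fixed point, and the constant can be estimated empirically.

```python
import numpy as np

def estimate_lipschitz(f, points, n_pairs=1000, seed=0):
    """Empirically estimate a Lipschitz constant of the map f by sampling
    pairs of states and taking the worst observed ratio
    ||f(x) - f(y)|| / ||x - y||."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_pairs):
        x = points[rng.integers(len(points))]
        y = points[rng.integers(len(points))]
        d = np.linalg.norm(x - y)
        if d > 1e-12:
            best = max(best, np.linalg.norm(f(x) - f(y)) / d)
    return best

# toy "tracker" iteration that pulls the state toward a target position;
# its exact Lipschitz constant is 0.5 < 1, so the iteration is a contraction
target = np.array([3.0, 4.0])
f = lambda x: target + 0.5 * (x - target)

rng = np.random.default_rng(1)
points = [rng.uniform(-10.0, 10.0, 2) for _ in range(200)]
L = estimate_lipschitz(f, points)
# L is (numerically) 0.5, below 1: the iterates converge to a unique fixed point
```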

    Views (last year): 20. Citations: 9 (RSCI).
  2. Grachev V.A., Nayshtut Yu.S.
    Variational principle for shape memory solids under variable external forces and temperatures
    Computer Research and Modeling, 2021, v. 13, no. 3, pp. 541-555

The quasistatic deformation problem for shape memory alloys is reviewed within the phenomenological mechanics of solids, without microphysical analysis. The phenomenological approach is based on a comparison of two material deformation diagrams. The first diagram corresponds to active proportional loading, when the alloy behaves as an ideal elastoplastic material; a residual strain is observed after unloading. The second diagram is relevant to the case when the deformed sample is heated to a certain temperature specific to each alloy. The initial shape is restored: the reverse distortion matches the deformations on the first diagram, except for the sign. Because the first step of distortion can be described by a variational principle for which the existence of generalized solutions is proved under arbitrary loading, it becomes clear how to explain the reverse distortion within a slightly modified theory of plasticity. The simply connected loading surface needs to be replaced with a doubly connected one, and the variational principle needs to be supplemented with the two laws of thermodynamics and the principle of orthogonality for thermodynamic forces and fluxes. In this case it is not difficult to prove the existence of solutions either. The successful application of the theory of plasticity at constant temperature motivates obtaining a similar result for the more general case of variable external forces and temperatures. The paper studies the ideal elastoplastic von Mises model at linear strain rates. Taking into account hardening and an arbitrary loading surface does not cause any additional difficulties.

An extended variational principle of the Reissner type is defined. Together with the laws of thermoplasticity, it makes it possible to prove the existence of generalized solutions for three-dimensional bodies made of shape memory materials. The main difficulty to resolve is the choice of a functional space for the rates and deformations of the continuum points. The space of bounded deformation, which is the main instrument of the mathematical theory of plasticity, serves this purpose in the paper. The proof shows that the choice of functional spaces used in the paper is not the only possible one. The study of other possible problem settings for the extended variational principle and the search for regularity of generalized solutions are interesting challenges for future research.

  3. Danilova M.Y., Malinovskiy G.S.
    Averaged heavy-ball method
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 277-308

First-order optimization methods are workhorses in a wide range of modern applications in economics, physics, biology, machine learning, control, and other fields. Among first-order methods, accelerated and momentum methods receive special attention because of their practical efficiency. The heavy-ball method (HB) is one of the first momentum methods; it was proposed in 1964, and the first analysis was conducted for quadratic strongly convex functions. Since then a number of variations of HB have been proposed and analyzed. In particular, HB is known for its simplicity of implementation and its performance on nonconvex problems. However, like other momentum methods, it has nonmonotone behavior, and for optimal parameters the method suffers from the so-called peak effect. To address this issue, in this paper we consider an averaged version of the heavy-ball method (AHB). We show that for quadratic problems AHB has a smaller maximal deviation from the solution than HB. Moreover, for general convex and strongly convex functions, we prove non-accelerated rates of global convergence of AHB, of its weighted version WAHB, and of AHB with restarts (R-AHB). To the best of our knowledge, such guarantees for HB with averaging were not explicitly proven for strongly convex problems in existing works. Finally, we conduct several numerical experiments on minimizing quadratic and nonquadratic functions to demonstrate the advantages of averaging for HB. We also tested one more modification of AHB, the tail-averaged heavy-ball method (TAHB). In the experiments, we observed that HB with a properly adjusted averaging scheme converges faster than HB without averaging and has smaller oscillations.
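A minimal sketch of the averaging idea on a quadratic (illustrative step sizes and problem data, not the paper's experiments): the heavy-ball iterates overshoot the solution (the peak effect), while their running average stays closer to it.

```python
import numpy as np

# Minimize f(x) = 0.5 * x^T A x and compare the heavy-ball iterates
# with their running average (an AHB-style sequence).
A = np.diag([1.0, 10.0])             # eigenvalues give mu = 1, L = 10
L_, mu = 10.0, 1.0
alpha = 4.0 / (np.sqrt(L_) + np.sqrt(mu)) ** 2                    # classical HB step
beta = ((np.sqrt(L_) - np.sqrt(mu)) / (np.sqrt(L_) + np.sqrt(mu))) ** 2

x = x_prev = np.array([10.0, 10.0])
avg = x.copy()
max_dev_hb, max_dev_ahb = 0.0, 0.0
for k in range(1, 200):
    x, x_prev = x - alpha * (A @ x) + beta * (x - x_prev), x
    avg += (x - avg) / (k + 1)       # running average of all iterates so far
    max_dev_hb = max(max_dev_hb, np.linalg.norm(x))
    max_dev_ahb = max(max_dev_ahb, np.linalg.norm(avg))
# the averaged sequence has a smaller maximal deviation from the solution
# (the origin) than the raw heavy-ball iterates: max_dev_ahb < max_dev_hb
```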

  4. Pletnev N.V., Matyukhin V.V.
    On the modification of the method of component descent for solving some inverse problems of mathematical physics
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 301-316

The article is devoted to solving ill-posed problems of mathematical physics for elliptic and parabolic equations, such as the Cauchy problem for the Helmholtz equation and the retrospective Cauchy problem for the heat equation with constant coefficients. These problems are reduced to convex optimization problems in Hilbert space. The gradients of the corresponding functionals are calculated approximately by solving two well-posed problems. A new method is proposed for solving the optimization problems under study: component-by-component descent in the basis of eigenfunctions of a self-adjoint operator associated with the problem. If the gradient could be calculated exactly, this method would give an arbitrarily accurate solution of the problem, depending on the number of basis elements considered. In real cases, the inaccuracy of the calculations leads to a violation of monotonicity, which requires the use of restarts and limits the achievable quality. The paper presents the results of experiments confirming the effectiveness of the constructed method. The new approach is shown to be superior to approaches based on gradient optimization methods: it achieves a better solution quality with significantly fewer computational resources. It is expected that the constructed method can be generalized to other problems.
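A toy finite-dimensional sketch of component-by-component descent in an eigenbasis (the operator and data here are illustrative, not the paper's inverse problems): with exact gradients, one exact line minimization per eigencomponent recovers the minimizer.

```python
import numpy as np

# Minimize the quadratic functional F(u) = 0.5 u^T A u - b^T u
# (A symmetric positive definite) by exact minimization along one
# eigenvector of A at a time.
rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD operator standing in for the problem
b = rng.standard_normal(n)

lam, V = np.linalg.eigh(A)           # eigenbasis of the self-adjoint operator
u = np.zeros(n)
for i in range(n):                   # one descent step per eigencomponent
    v = V[:, i]
    g = A @ u - b                    # gradient of F at the current point
    u -= (v @ g / lam[i]) * v        # exact line minimum along v
# with exact gradients the eigendirections decouple, so after n component
# steps u solves A u = b exactly
```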

  5. Minkevich I.G.
    Incomplete systems of linear equations with restrictions of variable values
    Computer Research and Modeling, 2014, v. 6, no. 5, pp. 719-745

A problem is formulated for the description of objects of various natures that uses a system of linear equations in which the number of variables exceeds the number of equations. An important feature of this problem, which substantially complicates its solution, is the existence of restrictions imposed on a number of the variables. In particular, the choice of an aggregate of biochemical reactions that converts a preset substrate (a feedstock) into a preset product belongs to this kind of problem. In this case, the unknown variables are the rates of the biochemical reactions, which form the vector to be determined. The components of this vector are subdivided into two groups: 1) the defined components, $\vec{y}$; 2) those dependent on the defined ones, $\vec{x}$. Possible configurations of the domain of $\vec{y}$ values permitted by the restrictions imposed on the $\vec{x}$ components have been studied. It has been found that some of the restrictions may be superfluous and therefore unnecessary for solving the problem. Situations are analyzed in which two or more $\vec{x}$ restrictions result in strict interconnections between $\vec{y}$ components. Methods for finding basic solutions that take into account the peculiarities of this problem are described. The statement of the general problem and the properties of its solutions are illustrated with a biochemical example.
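A small sketch of the basic-solution search under sign restrictions (illustrative matrix and right-hand side; the paper's biochemical systems are larger): enumerate square subsystems of an underdetermined system and keep the basic solutions that satisfy the restrictions.

```python
import itertools
import numpy as np

# Enumerate basic solutions of an underdetermined system S x = c
# (more variables than equations) and keep those satisfying x >= 0,
# a simple instance of restrictions on variable values.
S = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0]])   # 2 equations, 4 unknowns
c = np.array([3.0, 4.0])

m, n = S.shape
feasible = []
for cols in itertools.combinations(range(n), m):   # choose basis columns
    B = S[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                                    # singular basis, skip
    x = np.zeros(n)
    x[list(cols)] = np.linalg.solve(B, c)           # basic solution
    if np.all(x >= -1e-12):                         # restriction x >= 0
        feasible.append(x)
# 'feasible' now holds the admissible basic solutions of the system
```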

    Views (last year): 24. Citations: 3 (RSCI).
  6. Krat Y.G., Potapov I.I.
    Bottom stability in closed conduits
    Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1061-1068

In this paper, on the basis of the riverbed model proposed earlier, the one-dimensional stability problem for a closed flow channel with a sandy bed is solved. A feature of the investigated problem is the use of an original equation of riverbed deformations, which takes into account the influence of the mechanical and granulometric characteristics of the bed material and of the bed slope in the riverbed analysis. Another feature is that, along with the shear stress, the influence of the normal stress is considered when investigating riverbed instability. An analytical dependence determining the wavelength of the fastest-growing bed perturbations is obtained from the solution of the sandy bed stability problem for a closed flow channel. An analysis of the obtained dependence is performed. It is shown that it generalizes a number of well-known empirical formulas: those of Coleman, Shulyak, and Bagnold. The structure of the obtained analytical dependence indicates the existence of two hydrodynamic regimes, characterized by the Froude number, in which the growth of bed perturbations depends either strongly or weakly on the Froude number. Considering the natural stochasticity of the wave movement process and the presence of a solution domain with a weak dependence on the Froude number, it can be concluded that experimental observation of the development of bed wave movement should yield data with significant dispersion, and this is what occurs in reality.

    Views (last year): 1. Citations: 2 (RSCI).
We build new tests that make it possible to increase the human capacity for information processing through the parallel execution of several logical operations of a prescribed type. To check the causes of the capacity increase, we develop control tests on the same class of logical operations, in which parallel organization of the computation is ineffective. We use the apparatus of universal algebra and automata theory. This article extends a cycle of works investigating the human capacity for parallel computation; the general publications on this theme are listed in the references. The tasks in the described tests can be defined as computing the result of a sequence of operations of the same type from some algebra. If the operation is associative, then parallel computation is effective through suitable grouping of the process. In terms of computation theory, this corresponds to the simultaneous work of several processors. Each processor transforms per unit of time a certain known number of elements of the input data or of the intermediate results (the processor productivity). It is not known what kind of data elements the brain uses for logical or mathematical computation, or how many elements are processed per unit of time. Therefore the test contains a sequence of task presentations with different numbers of logical operations in a fixed alphabet; this number serves as a measure of task complexity. Analysis of the dependence of the solution time on the complexity makes it possible to estimate the processor productivity and the form of the organization of the computation. For sequential computation only one processor works, and the solution time is a linear function of complexity.
If new processors begin to work in parallel as the task complexity increases, then the dependence of the solution time on complexity is represented by a downward-convex curve. To detect the situation when a person increases the speed of a single processor as the complexity grows, we use task series with similar operations but in a non-associative algebra. In such tasks parallel computation gives little gain in efficiency from increasing the number of processors; this is the control set of tests. We also consider one more class of tests, based on computing the trajectory of the states of a formal automaton for a given input sequence. We investigate a special class of automata (relays) for which the construction affects the effectiveness of parallel computation of the final automaton state. For all tests we estimate the effectiveness of parallel computation. This article does not contain experimental results.
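The role of associativity described above can be sketched directly: tree (pairwise) grouping matches sequential evaluation exactly when the operation is associative, and generally fails when it is not.

```python
from functools import reduce

def tree_reduce(op, xs):
    """Pairwise reduction: the grouping a model parallel processor could use."""
    while len(xs) > 1:
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

xs = list(range(1, 9))
# associative operation: both groupings give the same result
assert tree_reduce(lambda a, b: a + b, xs) == reduce(lambda a, b: a + b, xs)

# non-associative operation (subtraction): the groupings disagree,
# so tree-parallel evaluation is no longer valid
seq = reduce(lambda a, b: a - b, xs)        # (((1-2)-3)-...)-8 = -34
par = tree_reduce(lambda a, b: a - b, xs)   # pairwise grouping gives 0
```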

    Views (last year): 14. Citations: 1 (RSCI).
  8. Ivanov A.M., Khokhlov N.I.
    Parallel implementation of the grid-characteristic method in the case of explicit contact boundaries
    Computer Research and Modeling, 2018, v. 10, no. 5, pp. 667-678

We consider an application of the Message Passing Interface (MPI) technology for parallelization of program code that solves the equations of linear elasticity theory. The solution of these equations describes the propagation of elastic waves in deformable rigid bodies; such direct problems of seismic wave propagation are of interest in seismic exploration and geophysics. Our solver implementation uses the grid-characteristic method. We consider a technique to reduce the communication time between MPI processes during the simulation. This is important when modeling must be conducted in complex problem formulations while still maintaining a high level of parallel efficiency, even when thousands of processes are used. Effective communication is extremely important when several computational grids with arbitrary geometry of the contacts between them are used in the calculation, and the complexity of this task increases if an independent distribution of the grid nodes between processes is allowed. In this paper, a generalized approach is developed for processing contact conditions in terms of reinterpolation of nodes from a given section of one grid to a certain area of the second grid, and an efficient way of parallelization and establishing interprocess communication is proposed. For the example problems we present wave fields and seismograms in both 2D and 3D formulations. It is shown that the algorithm can be realized both on Cartesian and on structured (curvilinear) computational grids. The considered statements demonstrate the possibility of carrying out calculations that take into account surface topography and the curvilinear geometry of the contacts between geological layers. The application of curvilinear grids makes it possible to obtain more accurate results than calculations using only Cartesian grids.
The resulting parallelization efficiency is almost 100% for up to 4096 processes (we used 128 processes as the baseline for computing efficiency). With more than 4096 processes, an expected gradual decrease in efficiency is observed. The rate of decline is not large: at 16384 processes the parallelization efficiency remains at 80%.
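For reference, the efficiency figures above follow the standard definition with a non-unit baseline; a sketch with hypothetical timings (not the article's measurements):

```python
# Parallel efficiency relative to a non-unit baseline, as in the scaling study
# above (128 processes taken as the basis). The timings below are illustrative
# numbers, not the measurements reported in the article.
def efficiency(t_base, p_base, t, p):
    """Observed speedup over the baseline divided by the ideal speedup p / p_base."""
    return (t_base / t) / (p / p_base)

t128 = 100.0                          # hypothetical runtime on 128 processes
t4096 = t128 * 128 / 4096 / 0.99      # a run that is ~99% efficient
t16384 = t128 * 128 / 16384 / 0.80    # a run that is ~80% efficient
e1 = efficiency(t128, 128, t4096, 4096)
e2 = efficiency(t128, 128, t16384, 16384)
# e1 ≈ 0.99, e2 ≈ 0.80
```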

    Views (last year): 18.
  9. Fomin A.A., Fomina L.N.
    Effect of buoyancy force on mixed convection of a variable density fluid in a square lid-driven cavity
    Computer Research and Modeling, 2020, v. 12, no. 3, pp. 575-595

The paper considers the problem of stationary mixed convection and heat transfer of a viscous heat-conducting fluid in a plane square lid-driven cavity. The hot top cover of the cavity has a temperature $T_\mathrm{H}$ and the cold bottom wall has a temperature $T_\mathrm{0}$ $(T_\mathrm{H} > T_\mathrm{0})$, while the side walls are insulated. A feature of the problem is that the fluid density can take arbitrary values depending on the amount of overheating of the cavity cover. The mathematical formulation includes the Navier–Stokes equations in the ’velocity–pressure’ variables and the heat balance equation, which take into account the incompressibility of the fluid flow and the influence of the volumetric buoyancy force. The difference approximation of the original differential equations was performed by the control volume method. Numerical solutions of the problem were obtained on a $501 \times 501$ grid for the following values of the similarity parameters: Prandtl number Pr = 0.70; Reynolds number Re = 100 and 1000; Richardson number Ri = 0.1, 1, and 10; and relative cover overheating $(T_\mathrm{H}-T_\mathrm{0})/T_\mathrm{0} = 0, 1, 2, 3$. Detailed flow patterns in the form of streamlines and isotherms of relative overheating of the fluid flow are given. It is shown that an increase in the Richardson number (an increase in the influence of the buoyancy force) leads to a fundamental change in the structure of the fluid flow. It is also found that taking into account the variability of the fluid density weakens the influence of the growth of Ri on the transformation of the flow structure. The change in density in a closed volume is the cause of this weakening, since it always leads to the existence of zones with negative buoyancy in the presence of a volumetric force. As a consequence, the competition of positive and negative volumetric forces weakens the overall buoyancy effect.
The behavior of the heat exchange coefficient (Nusselt number) and the friction coefficient along the bottom wall of the cavity, depending on the parameters of the problem, is also analyzed. It is revealed that the greater the Richardson number, the greater, ceteris paribus, the influence of density variation on these coefficients.
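The similarity parameters above are linked by the standard relation Ri = Gr/Re², so each (Ri, Re) pair fixes the Grashof number implicitly; a quick sketch of the values implied by the parameter combinations studied:

```python
# The similarity parameters are related by Ri = Gr / Re**2.
def grashof(Ri, Re):
    """Grashof number implied by given Richardson and Reynolds numbers."""
    return Ri * Re ** 2

cases = [(Ri, Re) for Ri in (0.1, 1.0, 10.0) for Re in (100, 1000)]
table = {(Ri, Re): grashof(Ri, Re) for Ri, Re in cases}
# e.g. Ri = 10, Re = 1000 corresponds to Gr = 1e7
```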

  10. Bozhko A.N.
    Hypergraph approach in the decomposition of complex technical systems
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1007-1022

The article considers a mathematical model of the decomposition of a complex product into assembly units. This is an important engineering problem which affects the organization of discrete production and its operational management. A review of modern approaches to the mathematical modeling and computer-aided design of decompositions is given; in them, graphs, networks, matrices, etc. serve as mathematical models of the structures of technical systems. These models describe the mechanical structure as a binary relation on a set of system elements. The geometric coordination and integrity of machines and mechanical devices during the manufacturing process is achieved by means of basing. In general, basing can be performed on several elements simultaneously; therefore, it represents a relation of variable arity, which cannot be correctly described in terms of binary mathematical structures. A new hypergraph model of the mechanical structure of a technical system is described. This model makes it possible to give an adequate formalization of assembly operations and processes. Assembly operations which are carried out by two working bodies and consist in the realization of mechanical connections are considered; such operations are called coherent and sequential, and they are the prevailing type of operations in modern industrial practice. It is shown that the mathematical description of such an operation is a normal contraction of an edge of the hypergraph. A sequence of contractions transforming the hypergraph into a point is a mathematical model of the assembly process. Two important theorems, proved by the author, on the properties of contractible hypergraphs and their subgraphs are presented. The concept of $s$-hypergraphs is introduced; $s$-hypergraphs are correct mathematical models of the mechanical structures of any assembled technical systems. The decomposition of a product into assembly units is defined as the cutting of an $s$-hypergraph into $s$-subgraphs.
The cutting problem is described in terms of discrete mathematical programming. Mathematical models of structural, topological, and technological constraints are obtained. Objective functions are proposed that formalize the optimal choice of design solutions in various situations. The developed mathematical model of product decomposition is flexible and open: it allows for extensions that take into account the characteristics of the product and its production.
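A minimal sketch of edge contraction on a hypergraph (an illustrative data structure, not the author's formal definition of normal contraction): contracting an edge merges its vertices into one, and repeated contractions shrink the hypergraph toward a single point, modeling completion of the assembly.

```python
# A hypergraph is represented as a list of frozenset edges; contracting an
# edge replaces all its vertices by a single merged vertex and rewrites the
# remaining edges accordingly.
def contract(edges, edge, merged):
    """Contract 'edge': replace all its vertices by the single vertex 'merged'."""
    new_edges = []
    for e in edges:
        if e == edge:
            continue                     # the contracted edge disappears
        ne = frozenset(merged if v in edge else v for v in e)
        if len(ne) > 1:                  # drop degenerate one-vertex edges
            new_edges.append(ne)
    return new_edges

# assembly-like example: four parts, edges model mechanical connections
edges = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3, 4})]
edges = contract(edges, frozenset({1, 2}), "a")    # first assembly operation
edges = contract(edges, frozenset({"a", 3}), "b")  # second operation
# one edge remains; contracting it would collapse the hypergraph to a point
```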


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

