Search results for 'first term':
Articles found: 31
  1. The main aim, formulated in the first part of the article, is to carry out detailed numerical studies of the chemical, ionization, optical, and temperature characteristics of the lower ionosphere perturbed by powerful radio emission. A brief review is given of the main experimental and theoretical studies of the physical phenomena occurring in the ionosphere when it is heated by high-power, high-frequency radio waves from heating facilities. The decisive role of the $D$-region of the ionosphere in the absorption of radio beam energy is shown. A detailed analysis of kinetic processes in the disturbed $D$-region, the most complex region in kinetic terms, has been performed. It is shown that a complete description of the ionization-chemical and optical characteristics of the disturbed region requires taking into account more than 70 components, which, according to their main physical content, can conveniently be divided into five groups. A kinetic model is presented to describe changes in the concentrations of the interacting components (the total number of reactions is 259). The system of kinetic equations was solved using a semi-implicit numerical method specially adapted to such problems. Based on the proposed structure, a software package was developed whose algorithm scheme allowed changing both the content of individual program blocks and their number, making it possible to conduct detailed numerical studies of individual processes in the behavior of the parameters of the perturbed region. The complete numerical algorithm is based on the two-temperature approximation, in which the main attention is paid to the calculation of the electron temperature, since its behavior is determined by inelastic kinetic processes involving electrons. The formulation of the problem is rather general and makes it possible to calculate the parameters of the disturbed ionosphere over a wide range of radio emission powers and frequencies.
Based on the developed numerical technique, it is possible to study a wide range of phenomena in both the natural and the disturbed ionosphere.
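    The semi-implicit treatment mentioned above can be illustrated on a single stiff rate equation. This is a minimal sketch assuming a toy electron balance dn/dt = q - αn² (production vs. dissociative recombination), not the paper's 70-component system; all values are illustrative:

```python
# Semi-implicit (linearized backward-Euler) step for a stiff kinetic
# equation dn/dt = q - alpha*n^2 (production vs. recombination).
# Toy illustration only: the paper's model couples ~70 species.

def semi_implicit_step(n, q, alpha, dt):
    # Linearize n_new^2 ~ n^2 + 2*n*(n_new - n) and solve for n_new.
    return (n + dt * (q + alpha * n * n)) / (1.0 + 2.0 * alpha * n * dt)

q, alpha, dt = 1.0e4, 1.0e-6, 1.0   # illustrative values
n = 0.0
for _ in range(500):
    n = semi_implicit_step(n, q, alpha, dt)

n_eq = (q / alpha) ** 0.5           # analytic steady state
print(n, n_eq)
```

Linearizing the quadratic loss term keeps the update stable at large step sizes, and the analytic steady state n* = sqrt(q/α) is an exact fixed point of the scheme.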

  2. Popov D.I.
    Calibration of an elastostatic manipulator model using AI-based design of experiment
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1535-1553

    This paper demonstrates the advantages of using artificial intelligence algorithms in design of experiment theory, which makes it possible to improve the accuracy of parameter identification for an elastostatic robot model. Design of experiment for a robot consists in selecting the optimal configuration-external force pairs for the identification algorithms and can be described by several main stages. At the first stage, an elastostatic model of the robot is created, taking into account all possible mechanical compliances. At the second stage, the objective function is selected, which can be represented both by classical optimality criteria and by criteria defined by the desired application of the robot. At the third stage, the optimal measurement configurations are found using numerical optimization. At the fourth stage, the position of the robot body is measured in the obtained configurations under the influence of an external force. At the last, fifth stage, the elastostatic parameters of the manipulator are identified based on the measured data.

    The objective function required to find the optimal configurations for industrial robot calibration is constrained by mechanical limits, both by the possible rotation angles of the robot’s joints and by the possible applied forces. Solving this multidimensional constrained problem is not simple; therefore, it is proposed to use approaches based on artificial intelligence. To find the minimum of the objective function, the following methods, sometimes also called heuristics, were used: genetic algorithms, particle swarm optimization, the simulated annealing algorithm, etc. The obtained results were analyzed in terms of the time required to obtain the configurations, the optimal value reached, as well as the final accuracy after applying the calibration. The comparison showed the advantages of the considered artificial-intelligence-based optimization techniques over the classical methods of finding the optimal value. The results of this work allow us to reduce the time spent on calibration and to increase the positioning accuracy of the robot’s end-effector after calibration for contact operations with high loads, such as machining and incremental forming.
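    As a sketch of the third stage, the following toy simulated annealing search minimizes a hypothetical smooth objective over joint angles within mechanical limits; the bounds, cost function, and cooling schedule are assumptions, not the paper's calibration criterion:

```python
import math, random

# Toy sketch: simulated annealing over joint angles within mechanical
# limits, minimizing an illustrative objective (not the paper's
# calibration criterion). Bounds and the objective are assumptions.

random.seed(0)
LIMITS = [(-math.pi, math.pi)] * 3          # assumed joint angle limits

def objective(q):
    # Hypothetical smooth cost with minimum at q = (1, -0.5, 0.2)
    target = (1.0, -0.5, 0.2)
    return sum((qi - ti) ** 2 for qi, ti in zip(q, target))

def anneal(steps=20000, t0=1.0):
    q = [random.uniform(lo, hi) for lo, hi in LIMITS]
    best, best_f = q[:], objective(q)
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9   # linear cooling schedule
        # propose a bounded random move in configuration space
        cand = [min(max(qi + random.gauss(0.0, 0.1), lo), hi)
                for qi, (lo, hi) in zip(q, LIMITS)]
        df = objective(cand) - objective(q)
        if df < 0.0 or random.random() < math.exp(-df / t):
            q = cand                        # Metropolis acceptance rule
            if objective(q) < best_f:
                best, best_f = q[:], objective(q)
    return best, best_f

best, best_f = anneal()
print(best_f)
```

Early high-temperature steps explore the constrained box; late low-temperature steps refine the best configuration found.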

  3. Laser damage to transparent solids is a major factor limiting the output power of laser systems. For laser rangefinders, the most likely cause of destruction of optical system elements (lenses, mirrors), which in practice are usually somewhat dusty, is not optical breakdown resulting from an avalanche, but rather a thermal effect on a dust speck deposited on an element of the optical system (EOS) that leads to its ignition. It is the ignition of a speck of dust that initiates the process of EOS damage.

    The corresponding model of this process leading to the ignition of a speck of dust takes into account the nonlinear Stefan–Boltzmann law of thermal radiation and the infinite thermal effect of periodic radiation on the EOS and the speck of dust. This model is described by a nonlinear system of differential equations for two functions: the EOS temperature and the dust speck temperature. It is proved that, owing to the accumulating effect of periodic thermal action, the dust speck reaches its ignition temperature under almost any a priori possible changes, during this process, of the thermophysical parameters of the EOS and the dust speck, as well as of the heat exchange coefficients between them and the surrounding air. Averaging these parameters over the variables related to both the volume and the surfaces of the dust speck and the EOS is correct under the natural constraints specified in the paper. The entire practically significant range of thermophysical parameters is covered thanks to the use of dimensionless units in the problem (including in the numerical results).

    A thorough mathematical study of the corresponding nonlinear system of differential equations made it possible, for the first time in the general case of thermophysical parameters and characteristics of the thermal effect of periodic laser radiation, to find a formula for the permissible radiation intensity that does not lead to the destruction of the EOS as a result of the ignition of a speck of dust deposited on it. In the special case of the data from the Grasse laser ranging station (southern France), the theoretical value of the permissible intensity found for the general case almost matches the experimentally observed one.

    In parallel with the solution of the main problem, we derive a formula for the power absorption coefficient of laser radiation by an EOS expressed in terms of four dimensionless parameters: the relative intensity of laser radiation, the relative illumination of the EOS, the relative heat transfer coefficient from the EOS to the surrounding air, and the relative steady-state temperature of the EOS.
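    The accumulation effect described above can be sketched with a dimensionless toy version of the two-temperature system; the coefficients, pulse shape, and the forward-Euler integration are illustrative assumptions, not the paper's calibrated model:

```python
# Toy two-temperature model of periodic laser heating of a dust speck
# on an optical element (EOS): dimensionless sketch with illustrative
# coefficients, not the paper's calibrated parameters.

def heat(I0=1.0, periods=50, n=2000):
    dt = 1.0 / n                      # n forward-Euler steps per period
    Te, Td, Tair = 1.0, 1.0, 1.0      # dimensionless temperatures
    peaks = []
    for _ in range(periods):
        peak = Td
        for k in range(n):
            I = I0 if k < n // 2 else 0.0   # pulse during half of each period
            # dust speck: absorption, exchange with air and EOS, T^4 radiation
            dTd = (0.5 * I - 0.2 * (Td - Tair) - 0.1 * (Td - Te)
                   - 1e-3 * (Td ** 4 - Tair ** 4))
            # EOS: weak absorption, exchange with air and the dust speck
            dTe = 0.05 * I - 0.1 * (Te - Tair) + 0.01 * (Td - Te)
            Td += dt * dTd
            Te += dt * dTe
            peak = max(peak, Td)
        peaks.append(peak)
    return peaks

peaks = heat()
print(peaks[0], peaks[-1])
```

The per-period peak temperature of the dust speck rises over successive pulses toward an asymptote, which is the accumulation mechanism the paper analyzes.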

  4. Malkov S.Yu., Korotayev A.V., Davydova O.I.
    World dynamics as an object of modeling (for the fiftieth anniversary of the first report to the Club of Rome)
    Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1371-1394

    In the last quarter of the twentieth century, the nature of global demographic and economic development began to change rapidly: the continuously accelerating growth of the main characteristics that took place over the previous two hundred years was replaced by a sharp slowdown. In the context of these changes, the role of a long-term forecast of global dynamics is increasing. At the same time, the forecast should be based not on inertial projection of past trends into future periods, but on mathematical modeling of fundamental patterns of historical development. The article presents preliminary results of research on mathematical modeling and forecasting of global demographic and economic dynamics based on this approach. The basic dynamic equations reflecting this dynamics are proposed, the modification of these equations in relation to different historical epochs is justified. For each historical epoch, based on the analysis of the corresponding system of equations, a phase portrait was determined and its features were analyzed. Based on this analysis, conclusions were drawn about the patterns of world development in the period under review.

    It is shown that mathematical description of technology development is important for modeling historical dynamics. A method for describing technological dynamics is proposed, on the basis of which the corresponding mathematical equations are proposed.

    Three stages of historical development are considered: the stage of agrarian society (before the beginning of the XIX century), the stage of industrial society (XIX–XX centuries) and the modern era. The proposed mathematical model shows that an agrarian society is characterized by cyclical demographic and economic dynamics, while an industrial society is characterized by an increase in demographic and economic characteristics close to hyperbolic.

    The results of mathematical modeling have shown that humanity is currently moving to a fundamentally new phase of historical development. There is a slowdown in growth and the transition of human society into a new phase state, the shape of which has not yet been determined. Various options for further development are considered.
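    The near-hyperbolic regime of the industrial epoch admits a one-line closed form. A minimal sketch, assuming the classic dN/dt = aN² growth law with illustrative constants:

```python
# Hyperbolic growth dN/dt = a*N^2 (characteristic of the industrial
# epoch in such models) has the closed form N(t) = N0/(1 - a*N0*t),
# diverging at t* = 1/(a*N0); illustrative numbers, not a calibration.

def hyperbolic(N0, a, t):
    return N0 / (1.0 - a * N0 * t)

N0, a = 1.0, 0.1          # assumed toy values
t_star = 1.0 / (a * N0)   # finite-time blow-up ("singularity") time
print(t_star, hyperbolic(N0, a, 5.0))
```

The finite-time divergence at t* is why a slowdown, i.e. a transition to a different dynamical regime, must eventually replace near-hyperbolic growth.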

  5. Stepanyan I.V.
    Biomathematical system of the nucleic acids description
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 417-434

    The article is devoted to the application of various methods of mathematical analysis, the search for patterns, and the study of the nucleotide composition of DNA sequences at the genomic level. New methods of mathematical biology that made it possible to detect and visualize the hidden ordering of genetic nucleotide sequences located in the chromosomes of cells of living organisms are described. The research was based on the work on algebraic biology of S. V. Petukhov, doctor of physical and mathematical sciences, who first introduced and justified new algebras and hypercomplex numerical systems describing genetic phenomena. This paper describes a new phase in the development of matrix methods in genetics for studying the properties of nucleotide sequences (and their physicochemical parameters), built on the principles of finite geometry. The aim of the study is to demonstrate the capabilities of the new algorithms and to discuss the discovered properties of genetic DNA and RNA molecules. The study includes three stages: parameterization, scaling, and visualization. Parameterization is the determination of the parameters taken into account, which are based on the structural and physicochemical properties of nucleotides as elementary components of the genome. Scaling plays the role of “focusing” and allows one to explore genetic structures at various scales. Visualization includes the selection of the axes of the coordinate system and the method of visual display. The algorithms presented in this work are put forward as a new toolkit for the development of research software for the analysis of long nucleotide sequences, with the ability to display genomes in parametric spaces of various dimensions. One of the significant results of the study is that new criteria were obtained for classifying the genomes of various living organisms in order to identify interspecific relationships.
The new concept allows one to assess, visually and numerically, the variability of the physicochemical parameters of nucleotide sequences. It also makes it possible to substantiate the relationship of the parameters of DNA and RNA molecules to fractal geometric mosaics, and reveals the ordering and symmetry of polynucleotides, as well as their noise immunity. The results obtained justified the introduction of new terms: “genometry” as a methodology of computational strategies and “genometrica” as the specific parameters of a particular genome or nucleotide sequence. In connection with the results obtained, questions of biosemiotics and of the hierarchical levels of organization of living matter are raised.
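    The parameterization-and-visualization pipeline can be sketched with a generic chaos-game-style mapping of a nucleotide string into the unit square; this is a simple geometric illustration in the spirit of the visualization stage, not Petukhov's matrix formalism:

```python
# Generic chaos-game-style mapping of a nucleotide string to points in
# the unit square: each base pulls the current point halfway toward
# its corner, so subsequences land in self-similar (fractal) cells.
# Illustrative only; not the algebraic-biology construction itself.

CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr(seq):
    x, y = 0.5, 0.5
    pts = []
    for base in seq:
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0   # step halfway to the corner
        pts.append((x, y))
    return pts

pts = cgr("ACGT")
print(pts)
```

Plotting such points for a long sequence produces the fractal mosaics that this family of visualizations studies.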

  6. Aksenov A.A., Kashirin V.S., Timushev S.F., Shaporenko E.V.
    Development of acoustic-vortex decomposition method for car tyre noise modelling
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 979-993

    Road noise is one of the key issues in maintaining high environmental standards. At speeds between 50 and 120 km/h, tires are the main source of noise generated by a moving vehicle. It is well known that either the interaction between the tire tread and the road surface or some internal dynamic effects are responsible for tire noise and vibration. This paper discusses the application of a new method for modelling the generation and propagation of sound during tire motion, based on the so-called acoustic-vortex decomposition. Currently, the application of the Lighthill equation and the aeroacoustic analogy are the main approaches used to model tire noise. The aeroacoustic analogy, in addressing the problem of separating the acoustic and vortex (pseudo-sound) modes of vibration, is not a mathematically rigorous formulation for deriving the source (right-hand side) of the acoustic wave equation. In the development of the acoustic-vortex decomposition method, a mathematically rigorous transformation of the equations of motion of a compressible medium is performed to obtain an inhomogeneous wave equation with respect to static enthalpy pulsations with a source term that depends on the velocity field of the vortex mode. In this case, the near-field pressure fluctuations are the sum of acoustic fluctuations and pseudo-sound. Thus, the acoustic-vortex decomposition method makes it possible to adequately model the acoustic field and the dynamic loads that generate tire vibration, providing a complete solution to the problem of modelling tire noise, which is the result of the turbulent flow around the tire with the generation of vortex sound, as well as of the dynamic loads and noise emission due to tire vibration. The method is first implemented and tested in the FlowVision software package. The results obtained with FlowVision are compared with those obtained with the LMS Virtual.Lab Acoustics package, and a number of differences in the acoustic field are highlighted.

  7. Stonyakin F.S., Savchuk O.S., Baran I.V., Alkousa M.S., Titov A.A.
    Analogues of the relative strong convexity condition for relatively smooth problems and adaptive gradient-type methods
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 413-432

    This paper is devoted to some variants of improving the convergence rate guarantees of gradient-type algorithms for relatively smooth and relatively Lipschitz-continuous problems in the case of additional information about some analogues of strong convexity of the objective function. We consider two classes of problems: convex problems with a relative functional growth condition, and problems (generally non-convex) with an analogue of the Polyak–Lojasiewicz gradient dominance condition with respect to the Bregman divergence. For the first type of problems, we propose two restart schemes for gradient-type methods and justify theoretical estimates of the convergence of two algorithms with adaptively chosen parameters corresponding to the relative smoothness or Lipschitz property of the objective function. The first of these algorithms is simpler in terms of its iteration stopping criterion, but for this algorithm the near-optimal computational guarantees are justified only on the class of relatively Lipschitz-continuous problems. The restart procedure of the other algorithm, in turn, allowed us to obtain more universal theoretical results. We proved a near-optimal complexity estimate on the class of convex relatively Lipschitz-continuous problems with a functional growth condition, and we also obtained linear convergence rate guarantees on the class of relatively smooth problems with a functional growth condition. For the class of problems with an analogue of the gradient dominance condition with respect to the Bregman divergence, estimates of the quality of the output solution were obtained using adaptively selected parameters. At the conclusion of the paper, we also present the results of some computational experiments illustrating the performance of the methods for the second approach.
As examples, we considered a linear inverse Poisson problem (minimizing the Kullback–Leibler divergence), its regularized version, which makes it possible to guarantee relative strong convexity of the objective function, as well as an example of a relatively smooth and relatively strongly convex problem. In particular, the calculations show that a relatively strongly convex function may not satisfy the relative variant of the gradient dominance condition.
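    A minimal Euclidean sketch of a restart scheme of the kind discussed: plain gradient descent on a strongly convex quadratic, restarted whenever the objective gap halves. The step size and test function are assumptions; the paper's methods work with Bregman divergences and adaptively chosen parameters:

```python
# Euclidean special case sketch of a restart scheme: gradient descent
# on a strongly convex quadratic, restarted each time the objective
# gap halves. Illustrative only: the paper uses Bregman divergences
# and adaptive parameter choices.

def grad_desc_restarts(f, grad, x0, step, f_star, target_gap):
    x, restarts = x0[:], 0
    while f(x) - f_star > target_gap:
        gap0 = f(x) - f_star
        while f(x) - f_star > gap0 / 2.0:   # one restart period
            g = grad(x)
            x = [xi - step * gi for xi, gi in zip(x, g)]
        restarts += 1
    return x, restarts

# f(x) = 0.5*(x1^2 + 10*x2^2): strongly convex, minimum 0 at the origin
f = lambda x: 0.5 * (x[0] ** 2 + 10.0 * x[1] ** 2)
grad = lambda x: [x[0], 10.0 * x[1]]
x, restarts = grad_desc_restarts(f, grad, [5.0, 3.0], 0.09, 0.0, 1e-8)
print(f(x), restarts)
```

Each restart period halves the gap in a bounded number of steps, so the number of restarts grows only logarithmically in the target accuracy, which is the mechanism behind linear convergence under a growth condition.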

  8. Pletnev N.V., Dvurechensky P.E., Gasnikov A.V.
    Application of gradient optimization methods to solve the Cauchy problem for the Helmholtz equation
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 417-444

    The article is devoted to studying the application of convex optimization methods to solve the Cauchy problem for the Helmholtz equation, which is ill-posed since the equation belongs to the elliptic type. The Cauchy problem is formulated as an inverse problem and is reduced to a convex optimization problem in a Hilbert space. The functional to be optimized and its gradient are calculated using the solutions of boundary value problems, which, in turn, are well-posed and can be approximately solved by standard numerical methods, such as finite-difference schemes and Fourier series expansions. The convergence of the applied fast gradient method and the quality of the solution obtained in this way are investigated experimentally. The experiments show that the accelerated gradient method (the Similar Triangle Method) converges faster than the non-accelerated method. Theorems on the computational complexity of the resulting algorithms are formulated and proved. It is found that Fourier series expansions are better than finite-difference schemes in terms of computation speed and improve the quality of the solution obtained. An attempt was made to use restarts of the Similar Triangle Method after halving the residual of the functional. In this case, the convergence does not improve, which confirms the absence of strong convexity. The experiments also show that the inaccuracy of the calculations is more adequately described by the additive concept of noise in the first-order oracle. This factor limits the achievable quality of the solution, but the error does not accumulate. According to the results obtained, the use of accelerated gradient optimization methods can be an effective way to solve inverse problems.
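    The benefit of acceleration can be sketched with a constant-momentum accelerated gradient scheme (the Similar Triangle Method belongs to this family of accelerated methods), compared against plain gradient descent; the ill-conditioned quadratic test problem is an assumption for illustration, not the discretized Helmholtz functional:

```python
# Accelerated gradient with constant momentum vs. plain gradient
# descent on an ill-conditioned quadratic. The test problem is an
# assumption for illustration, not the Helmholtz functional.

L, MU = 100.0, 1.0                       # smoothness / strong convexity
f = lambda x: 0.5 * (MU * x[0] ** 2 + L * x[1] ** 2)
grad = lambda x: [MU * x[0], L * x[1]]

def solve(accelerated, tol=1e-6, max_iter=10**5):
    x = [1.0, 1.0]
    y = x[:]
    kappa = L / MU
    beta = (kappa ** 0.5 - 1.0) / (kappa ** 0.5 + 1.0)  # momentum weight
    for k in range(max_iter):
        if f(x) < tol:
            return k                     # iterations to reach tolerance
        p = y if accelerated else x
        g = grad(p)
        x_new = [p[i] - g[i] / L for i in range(2)]
        y = [x_new[i] + beta * (x_new[i] - x[i]) for i in range(2)]
        x = x_new
    return max_iter

n_gd, n_agd = solve(False), solve(True)
print(n_gd, n_agd)
```

On this problem the momentum step cuts the iteration count by roughly the square root of the condition number, which mirrors the speed-up reported for the accelerated method.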

  9. Tominin Y.D., Tominin V.D., Borodich E.D., Kovalev D.A., Dvurechensky P.E., Gasnikov A.V., Chukanov S.V.
    On Accelerated Methods for Saddle-Point Problems with Composite Structure
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 433-467

    We consider strongly-convex-strongly-concave saddle-point problems with a general non-bilinear objective and different condition numbers with respect to the primal and dual variables. First, we consider such problems with smooth composite terms, one of which has a finite-sum structure. For this setting we propose a variance reduction algorithm with complexity estimates superior to the existing bounds in the literature. Second, we consider finite-sum saddle-point problems with composite terms and propose several algorithms depending on the properties of the composite terms. When the composite terms are smooth, we obtain better complexity bounds than the ones in the literature, including the bounds of a recently proposed nearly-optimal algorithm that does not consider the composite structure of the problem. If the composite terms are prox-friendly, we propose a variance reduction algorithm that, on the one hand, is accelerated compared to existing variance reduction algorithms and, on the other hand, provides, in the composite setting, complexity bounds similar to those of the nearly-optimal algorithm designed for the non-composite setting. Besides, our algorithms allow one to separate the complexity bounds, i.e., to estimate, for each part of the objective separately, the number of oracle calls sufficient to achieve a given accuracy. This is important since different parts can have different arithmetic complexity of the oracle, and it is desirable to call expensive oracles less often than cheap ones. The key to all these results is our general framework for saddle-point problems, which may be of independent interest. This framework, in turn, is based on our proposed Accelerated Meta-Algorithm for composite optimization with probabilistic inexact oracles and probabilistic inexactness in the proximal mapping, which may be of independent interest as well.
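    For a flavor of the problem class, the following toy runs simultaneous gradient descent-ascent on a strongly-convex-strongly-concave objective; it illustrates only the saddle-point setting, not the paper's accelerated variance-reduction framework:

```python
# Simultaneous gradient descent-ascent on a toy strongly-convex-
# strongly-concave saddle-point problem; illustrative dynamics only,
# not the paper's accelerated variance-reduction algorithms.

def gda(steps=100, eta=0.5):
    x, y = 1.0, 1.0
    for _ in range(steps):
        # L(x, y) = 0.5*x**2 + x*y - 0.5*y**2, saddle point at (0, 0)
        gx = x + y          # dL/dx: descend in x
        gy = x - y          # dL/dy: ascend in y
        x, y = x - eta * gx, y + eta * gy
    return x, y

x, y = gda()
print(x, y)
```

Strong convexity in x and strong concavity in y make the coupled update a contraction toward the saddle point; without them the same scheme can cycle, which is why more careful methods are needed in general.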

  10. Aksenov A.A., Zhluktov S.V., Kalugina M.D., Kashirin V.S., Lobanov A.I., Shaurman D.V.
    Reduced mathematical model of blood coagulation taking into account thrombin activity switching as a basis for estimation of hemodynamic effects and its implementation in FlowVision package
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 1039-1067

    The possibility of numerical 3D simulation of thrombus formation is considered.

    The detailed mathematical models developed to date to describe the formation of thrombi and clots include a great number of equations. When implemented in a CFD code, such detailed models require substantial computer resources to simulate thrombus growth in a blood flow. A reasonable alternative is to use reduced mathematical models. Two models based on the reduced mathematical model of thrombin generation are described in this paper.

    The first model describes the growth of a thrombus in a great vessel (artery). Arterial flows are essentially unsteady. They are characterized by pulse waves. The blood velocity here is high compared to that in the venous tree. The reduced model for thrombin generation and thrombus growth in an artery is relatively simple. The processes accompanying thrombin generation in arteries are well described by the zero-order approximation.

    A venous flow is characterized by lower velocities, lower gradients, and lower shear stresses. In order to simulate thrombin generation in veins, a more complex system of equations has to be solved. The model must allow for all the nonlinear terms in the right-hand sides of the equations.

    The simulation is carried out in the industrial software FlowVision.

    The performed numerical investigations have shown the suitability of the reduced models for simulation of thrombin generation and thrombus growth. The calculations demonstrate formation of the recirculation zone behind a thrombus. The concentration of thrombin and the mass fraction of activated platelets are maximum here. Formation of such a zone causes slow growth of the thrombus downstream. At the upwind part of the thrombus, the concentration of activated platelets is low, and the upstream thrombus growth is negligible.

    When the variation of the blood flow during a heart cycle is taken into account, the thrombus grows substantially more slowly than in the results obtained under the assumption of constant (averaged over a heart cycle) conditions. Thrombin and activated platelets produced during diastole are quickly carried away by the blood flow during systole. Taking the non-Newtonian rheology of blood into account noticeably affects the results.
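    The wash-out effect noted above can be mimicked with a generic logistic sketch of self-accelerating thrombin generation plus a flow removal term; the equation form and all coefficients are purely illustrative assumptions, not the reduced model implemented in FlowVision:

```python
# Generic logistic sketch of self-accelerating thrombin generation
# with a flow wash-out term; the form and coefficients are purely
# illustrative, not the reduced model implemented in FlowVision.

def simulate(washout, k=2.0, tmax=1.0, dt=0.001, steps=20000):
    T = 0.01                        # small initial thrombin level
    for _ in range(steps):
        # autocatalytic production saturating at tmax, minus wash-out
        dT = k * T * (1.0 - T / tmax) - washout * T
        T += dt * dT                # forward-Euler step
    return T

T_static = simulate(washout=0.0)    # stagnant blood: T approaches tmax
T_flow = simulate(washout=1.0)      # flow carries thrombin away
print(T_static, T_flow)
```

The wash-out term lowers the steady thrombin level, qualitatively matching the observation that flow during systole removes thrombin produced during diastole and slows thrombus growth.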


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

