Search results for 'solution method':
Articles found: 290
  1. Pletnev N.V., Matyukhin V.V.
    On the modification of the method of component descent for solving some inverse problems of mathematical physics
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 301-316

    The article is devoted to solving ill-posed problems of mathematical physics for elliptic and parabolic equations, such as the Cauchy problem for the Helmholtz equation and the retrospective Cauchy problem for the heat equation with constant coefficients. These problems are reduced to convex optimization problems in Hilbert space. The gradients of the corresponding functionals are computed approximately by solving two well-posed problems. A new method is proposed for the optimization problems under study: component-by-component descent in the basis of eigenfunctions of a self-adjoint operator associated with the problem. If the gradient could be computed exactly, this method would give an arbitrarily accurate solution, depending on the number of basis elements considered. In practice, computational inaccuracy violates monotonicity, which requires restarts and limits the achievable quality. The paper presents experimental results confirming the effectiveness of the constructed method. The new approach is shown to be superior to approaches based on gradient optimization methods: it achieves better solution quality with significantly fewer computational resources. It is expected that the method can be generalized to other problems.
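The component-by-component descent idea can be sketched in a few lines. The example below is a hedged illustration, not the authors' code: it minimizes a quadratic $f(x) = \frac12 x^T A x - b^T x$ whose eigenpairs are known by construction, taking one exact minimization step along each eigendirection. With exact gradients, a single sweep over the basis solves the problem, mirroring the "arbitrarily exact solution" property noted above; all parameter values are invented.

```python
import math

# Component-wise descent in the eigenbasis of a symmetric operator A,
# minimizing f(x) = 0.5 x^T A x - b^T x. A = Q diag(lam) Q^T is built
# from known eigenpairs (a 2-D rotation), so each step along an
# eigendirection can be taken exactly.
theta = 0.7
Q = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]   # columns = eigenvectors
lam = [4.0, 0.5]                            # eigenvalues
b = [1.0, -2.0]

def matvec(x):
    """A x with A = Q diag(lam) Q^T."""
    y = [Q[0][i]*x[0] + Q[1][i]*x[1] for i in range(2)]      # Q^T x
    y = [lam[i]*y[i] for i in range(2)]                      # diag(lam)
    return [Q[i][0]*y[0] + Q[i][1]*y[1] for i in range(2)]   # Q (...)

def f(x):
    Ax = matvec(x)
    return 0.5*(x[0]*Ax[0] + x[1]*Ax[1]) - (b[0]*x[0] + b[1]*x[1])

x = [0.0, 0.0]
for k in range(2):                       # one exact step per eigendirection
    v = [Q[0][k], Q[1][k]]
    r = [b[i] - matvec(x)[i] for i in range(2)]   # residual = -grad f
    c = (r[0]*v[0] + r[1]*v[1]) / lam[k]          # exact step length
    x = [x[i] + c*v[i] for i in range(2)]

# After one sweep x solves A x = b, i.e. x minimizes f
print(max(abs(matvec(x)[i] - b[i]) for i in range(2)))  # ~0
```

With an inexact gradient the residual projection `r·v` would carry noise, the monotone decrease of `f` breaks, and restarts become necessary, which is exactly the limitation discussed in the abstract.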

  2. Voloshin A.S., Konyukhov A.V., Pankratov L.S.
    Homogenized model of two-phase capillary-nonequilibrium flows in a medium with double porosity
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 567-580

    A mathematical model of two-phase capillary-nonequilibrium isothermal flows of incompressible phases in a double porosity medium is constructed. The double porosity medium considered is a composition of two porous media with contrasting capillary properties (absolute permeability, capillary pressure). One of the constituent media is highly permeable and conductive; the second has low permeability and forms a disconnected system of matrix blocks. A distinctive feature of the model is that it accounts for the influence of capillary nonequilibrium on mass transfer between the double porosity subsystems, while the nonequilibrium properties of two-phase flow in the constituent media are described in a linear approximation within the Hassanizadeh model. Homogenization by the method of formal asymptotic expansions leads to a system of partial differential equations whose coefficients depend on internal variables determined from the solution of cell problems. Numerical solution of the cell problems is computationally expensive; therefore, a thermodynamically consistent kinetic equation is formulated for the internal parameter characterizing the phase distribution between the double porosity subsystems. Dynamic relative phase permeabilities and capillary pressures in drainage and imbibition processes are constructed. It is shown that the capillary nonequilibrium of flows in the constituent subsystems strongly influences them; analysis and modeling of this factor are therefore important in transfer problems in double porosity systems.
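The linear nonequilibrium closure mentioned above is usually written, in the commonly cited Hassanizadeh–Gray form (stated here as the textbook relation, not necessarily the exact closure used by the authors), as

$$p_n - p_w = p_c^{\mathrm{eq}}(S_w) - \tau\,\frac{\partial S_w}{\partial t},$$

where $p_n$ and $p_w$ are the non-wetting and wetting phase pressures, $p_c^{\mathrm{eq}}(S_w)$ is the equilibrium capillary pressure curve, $S_w$ is the wetting-phase saturation, and $\tau$ is the nonequilibrium relaxation coefficient; at $\tau = 0$ the classical equilibrium relation is recovered.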

  3. Grachev V.A., Nayshtut Yu.S.
    Buckling prediction for shallow convex shells based on the analysis of nonlinear oscillations
    Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1189-1205

    Buckling problems of thin elastic shells have become relevant again because of the discrepancies between the standards in many countries on how to estimate the loads causing buckling of shallow shells and the results of experiments on thin-walled aviation structures made of high-strength alloys. The main contradiction is as follows: the ultimate internal stresses at shell buckling (collapse) turn out to be lower than those predicted by the design theory adopted in the USA and European standards. The current regulations are based on the static theory of shallow shells put forward in the 1930s: within the nonlinear theory of elasticity for thin-walled structures there are stable solutions that differ significantly from the forms of equilibrium typical of small initial loads. The minimum load at which an alternative form of equilibrium exists (the lowest critical load) was adopted as the maximum permissible one. In the 1970s it was recognized that this approach is unacceptable for complex loadings. Such cases were of little practical relevance in the past, but they now occur with the thinner structures used under complex conditions. Therefore, the initial theory underlying bearing capacity assessments needs to be revised. Recent mathematical results that proved the asymptotic proximity of estimates based on two analyses (the three-dimensional dynamic theory of elasticity and the dynamic theory of shallow convex shells) can serve as the theoretical basis. This paper starts with the formulation of the dynamic theory of shallow shells, which comes down to one resolving integro-differential equation (once a special Green function is constructed). It is shown that the obtained nonlinear equation allows separation of variables and has numerous time-periodic solutions that satisfy the Duffing equation with “a soft spring”.
This equation has been thoroughly studied; its numerical analysis enables finding the amplitude and the oscillation period depending on the properties of the Green function. If the shell is excited by a trial time-harmonic load, the displacement of the surface points can be measured at the maximum amplitude. The study proposes an experimental set-up in which resonance oscillations are generated by a trial load normal to the surface. Experimental measurement of the shell displacements, the amplitude, and the oscillation period makes it possible to estimate the safety factor of the structure's bearing capacity by non-destructive methods under operating conditions.
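The amplitude-period dependence exploited above can be illustrated numerically. The sketch below (with invented parameter values, not those of the paper) integrates a soft-spring Duffing oscillator $\ddot{x} + \omega_0^2 x - \beta x^3 = 0$ with a classical RK4 scheme and reads the period off successive upward zero crossings of $x(t)$; for a soft spring the period grows with amplitude.

```python
# Free oscillations of a "soft spring" Duffing oscillator
#   x'' + w0^2 * x - beta * x^3 = 0,
# integrated by classical RK4. Parameter values are illustrative only.
w0, beta = 1.0, 0.2

def accel(x):
    return -(w0**2) * x + beta * x**3

def period(amplitude, dt=1e-4, t_max=100.0):
    """Period measured between two successive upward zero crossings."""
    x, v, t = amplitude, 0.0, 0.0
    crossings = []
    while t < t_max and len(crossings) < 2:
        # one RK4 step for the system x' = v, v' = accel(x)
        k1x, k1v = v, accel(x)
        k2x, k2v = v + 0.5*dt*k1v, accel(x + 0.5*dt*k1x)
        k3x, k3v = v + 0.5*dt*k2v, accel(x + 0.5*dt*k2x)
        k4x, k4v = v + dt*k3v, accel(x + dt*k3x)
        x_new = x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6
        v_new = v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6
        if x <= 0.0 < x_new:                       # upward zero crossing
            crossings.append(t + dt*(-x)/(x_new - x))
        x, v, t = x_new, v_new, t + dt
    return crossings[1] - crossings[0]

# For a soft spring the period grows with amplitude:
print(period(0.1), period(1.0))
```

Measuring this amplitude-period relationship experimentally is what allows the bearing-capacity estimate described in the abstract.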

  4. Kazorin V.I., Kholodov Y.A.
    Framework sumo-atclib for adaptive traffic control modeling
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 69-78

    This article proposes the sumo-atclib framework, which provides a convenient uniform interface for testing adaptive control algorithms under various constraints (for example, restrictions on phase durations, phase sequences, or the minimum time between control actions) and uses the open-source microscopic traffic simulation environment SUMO. The framework separates the functionality of controllers (class TrafficController) from that of the monitoring and detection system (class StateObserver); this mirrors the architecture of real traffic light installations and adaptive control systems and simplifies the testing of new algorithms, since combinations of different controllers and vehicle detection systems can be freely varied. Unlike most existing solutions, a road class Road has also been added, which combines a set of lanes. This makes it possible, for example, to determine the adjacency of signalized intersections in cases where the number of lanes changes between intersections and the road graph is therefore divided into several edges. The algorithms themselves use the same interface and are abstracted from the specific parameters of detectors and network topologies, so a traffic engineer is expected to be able to test ready-made algorithms on a new scenario without adapting them to new conditions, which speeds up the development of the control system and reduces design overhead. At the moment, the package contains examples of MaxPressure algorithms and the Q-learning reinforcement learning method, and the database of examples continues to grow. The framework also includes a set of SUMO scenarios for testing algorithms, comprising both synthetic maps and well-verified SUMO scenarios such as Cologne and Ingolstadt.
In addition, the framework provides a set of automatically calculated metrics, such as total travel time, delay time, and average speed, as well as a ready-made example of metric visualization.
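The controller/observer separation described above might look roughly as follows. Only the class names TrafficController, StateObserver, and Road come from the abstract; every method name, signature, and value in this sketch is an assumption, not the actual sumo-atclib API.

```python
from abc import ABC, abstractmethod

class Road:
    """A road aggregating several SUMO lanes between two intersections."""
    def __init__(self, road_id, lane_ids):
        self.road_id = road_id
        self.lane_ids = list(lane_ids)

class StateObserver(ABC):
    """Monitoring/detection side: turns raw detector data into a state."""
    @abstractmethod
    def observe(self, roads):
        ...

class TrafficController(ABC):
    """Control side: maps an observed state to a signal decision."""
    @abstractmethod
    def act(self, state):
        ...

class QueueLengthObserver(StateObserver):
    def __init__(self, queues):
        self.queues = queues          # stand-in for real detector readings
    def observe(self, roads):
        return {r.road_id: self.queues.get(r.road_id, 0) for r in roads}

class MaxPressureController(TrafficController):
    def act(self, state):
        # serve the road with the largest queue (toy MaxPressure flavour)
        return max(state, key=state.get)

# Any observer can be combined with any controller:
roads = [Road("north", ["n_0", "n_1"]), Road("east", ["e_0"])]
obs = QueueLengthObserver({"north": 4, "east": 9})
ctrl = MaxPressureController()
print(ctrl.act(obs.observe(roads)))   # -> east
```

Because the controller sees only the dictionary produced by `observe`, swapping in a different detection system requires no change to the control algorithm, which is the point of the separation described in the abstract.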

  5. Doludenko A.N., Kulikov Y.M., Saveliev A.S.
    Chaotic flow evolution arising in a body force field
    Computer Research and Modeling, 2024, v. 16, no. 4, pp. 883-912

    This article presents the results of an analytical and computer study of the chaotic evolution of a regular velocity field generated by a large-scale harmonic forcing. The authors obtained an analytical solution for the flow stream function and its derivative quantities (velocity, vorticity, kinetic energy, enstrophy and palinstrophy). Numerical modeling of the flow evolution was carried out using the OpenFOAM software package based on an incompressible model, as well as two in-house implementations of the CABARET and MacCormack methods employing a nearly incompressible formulation. Calculations were carried out on a sequence of nested meshes with $64^2$, $128^2$, $256^2$, $512^2$, $1024^2$ cells for two characteristic (asymptotic) Reynolds numbers characterizing laminar and turbulent evolution of the flow, respectively. Simulations show that blow-up of the analytical solution takes place in both cases. The energy characteristics of the flow are discussed relying upon the energy curves as well as the dissipation rates; on the fine mesh, the numerical dissipation rate turns out to be several orders of magnitude less than its hydrodynamic (viscous) counterpart. Destruction of the regular flow structure is observed for all of the numerical methods, even at the late stages of laminar evolution, when the numerically obtained distributions are close to the analytical ones. It can be assumed that the prerequisite for the development of instability is the error accumulated during the calculation. This error leads to unevenness in the vorticity distribution and, as a consequence, to variance in vortex intensity, finally leading to chaotization of the flow. To study the processes of vorticity production, two integral vorticity-based quantities are used: integral enstrophy ($\zeta$) and palinstrophy ($P$). The formulation of the problem with periodic boundary conditions allows a simple connection between these quantities to be established.
In addition, $\zeta$ can act as a measure of the eddy resolution of the numerical method, and palinstrophy determines the degree of production of small-scale vorticity.
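For reference, in the unforced two-dimensional incompressible case with periodic boundary conditions, the standard enstrophy balance links these two quantities (stated here as the textbook relation, which may differ in normalization from the authors' definitions):

$$\frac{d\zeta}{dt} = -2\nu P, \qquad \zeta = \frac{1}{2}\int_\Omega \omega^2\, d\mathbf{x}, \qquad P = \frac{1}{2}\int_\Omega |\nabla\omega|^2\, d\mathbf{x},$$

where $\omega$ is the vorticity and $\nu$ the kinematic viscosity: the advection term vanishes under periodic boundaries, and there is no vortex stretching in 2D, so palinstrophy directly controls the viscous dissipation of enstrophy.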

  6. Nazarov F.K.
    Numerical study of high-speed mixing layers based on a two-fluid turbulence model
    Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1125-1142

    This work is devoted to the numerical study of high-speed mixing layers of compressible flows. The problem under consideration has a wide range of practical applications and, despite its apparent simplicity, is quite complex to model, because in the mixing layer the instability of the tangential velocity discontinuity causes the flow to pass from the laminar to the turbulent regime. The numerical results obtained for this problem therefore depend strongly on the adequacy of the turbulence models used. In the present work, the problem is studied on the basis of the two-fluid approach to turbulence. This approach arose relatively recently and is developing quite rapidly. Its main advantage is that it leads to a closed system of equations, whereas the long-standing Reynolds approach is known to lead to an open system. The paper presents the essence of the two-fluid approach to modeling a turbulent compressible medium and the methodology for the numerical implementation of the proposed model. To obtain a stationary solution, the relaxation method and Prandtl boundary layer theory were applied, resulting in a simplified system of equations. In the problem considered, high-speed flows are mixed, so heat transfer must also be modeled, and the pressure cannot be taken as constant, as is done for incompressible flows. In the numerical implementation, the convective terms in the hydrodynamic equations were approximated by an explicit second-order upwind scheme, and the diffusion terms on the right-hand sides of the equations were approximated implicitly by central differences. The sweep method was used to solve the resulting equations, and the SIMPLE method was used to correct the velocity through the pressure. The paper investigates the two-fluid turbulence model at different initial flow turbulence intensities.
The numerical results show good agreement with known experimental data at an inlet turbulence intensity of $0.1\,\% < I < 1\,\%$. Data from known experiments, as well as results of the $k-kL+J$ and LES models, are presented to demonstrate the effectiveness of the proposed turbulence model. The two-fluid model is shown to be as accurate as known modern models and more efficient in terms of computing resources.
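The "sweep method" mentioned above is the standard tridiagonal (Thomas) algorithm for the linear systems produced by implicit central-difference discretization. A minimal self-contained sketch, with an invented test system:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal,
    d = right-hand side. Forward elimination, then back substitution."""
    n = len(d)
    cp, dp = [0.0]*n, [0.0]*n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination sweep
        m = b[i] - a[i]*cp[i-1]
        cp[i] = c[i]/m if i < n-1 else 0.0
        dp[i] = (d[i] - a[i]*dp[i-1]) / m
    x = [0.0]*n
    x[-1] = dp[-1]
    for i in range(n-2, -1, -1):               # back substitution sweep
        x[i] = dp[i] - cp[i]*x[i+1]
    return x

# Diagonally dominant test system; its exact solution is x = [1, 1, 1, 1].
a = [0.0, -1.0, -1.0, -1.0]
b = [4.0,  4.0,  4.0,  4.0]
c = [-1.0, -1.0, -1.0, 0.0]
d = [3.0,  2.0,  2.0,  3.0]
print(thomas(a, b, c, d))
```

For the diagonally dominant matrices that arise from implicit diffusion terms, the sweep is stable and runs in O(n) per line, which is why it is the workhorse for boundary-layer-type discretizations.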

  7. Zhikharev I.M., Tcheremissine F.G., Kloss Y.Y.
    Modeling of gas mixture separation in a multistage micropump based on the solution of the Boltzmann equation
    Computer Research and Modeling, 2024, v. 16, no. 6, pp. 1417-1432

    The paper simulates a mixture of gases in a multi-stage micropump and evaluates its effectiveness at separating the components of the mixture. A device in the form of a long channel with a series of transverse plates is considered. A temperature difference between the sides of the plates induces a radiometric gas flow within the device, and the difference in the masses of the gases leads to differences in flow velocities and to the separation of the mixture. Modeling is based on the numerical solution of the Boltzmann kinetic equation, for which a splitting scheme is used, i.e., the advection equation and the relaxation problem are solved separately in alternation. The collision integral is calculated using the conservative projection method. This method ensures the strict fulfillment of the laws of conservation of mass, momentum, and energy, as well as the important asymptotic property that the collision integral of the Maxwell distribution equals zero. Explicit first-order and second-order TVD schemes are used to solve the advection equation. The calculations were performed for a neon-argon mixture using a hard-sphere model with real molecular diameters and masses. Software has been developed that allows calculations on personal computers and cluster systems. Parallelization speeds up the computation and keeps the time per iteration constant for devices of different sizes, enabling the modeling of large particle systems. It was found that the degree of mixture separation, i.e., the ratio of densities at the ends of the device, depends linearly on the number of cascades in the device, which makes it possible to estimate separation for multi-cascade systems whose direct computer modeling is infeasible. Flows and distributions of gas inside the device during its operation were analyzed.
It was demonstrated that devices of this kind with a sufficiently large number of plates are suitable for the separation of gas mixtures, especially since they have no moving parts, are quite simple to manufacture, and are less subject to wear.

  8. Cherepanov V.V.
    Modeling the thermal field of stationary symmetric bodies in rarefied low-temperature plasma
    Computer Research and Modeling, 2025, v. 17, no. 1, pp. 73-91

    The work investigates the process of self-consistent relaxation of the region of disturbances created in a rarefied binary low-temperature plasma by a stationary charged ball or cylinder with an absorbing surface. A feature of such problems is their self-consistent kinetic nature, in which the transfer processes in phase space cannot be separated from the formation of the electromagnetic field. A mathematical model is presented that makes it possible to describe and analyze the state of the gas and the electric and thermal fields in the vicinity of the body. The multidimensionality of the kinetic formulation creates certain problems for the numerical solution; therefore, a curvilinear system of nonholonomic coordinates was selected that minimizes the phase space of the problem and thereby increases the efficiency of numerical methods. For such coordinates, the form of the Vlasov kinetic equation has been justified and analyzed. To solve it, a variant of the large particle method with a constant form factor was used. The calculations used a moving grid that tracks the displacement of the support of the distribution function in phase space, further reducing the volume of the controlled region of phase space. Key details of the model and numerical method are described. The model and the method are implemented as code in the Matlab language. Using the example of the problem for a ball, the presence of significant disequilibrium and anisotropy in the particle velocity distribution in the disturbed zone is shown. Based on the calculation results, the evolution of the structure of the particle distribution function, profiles of the main macroscopic characteristics of the gas (concentration, current, temperature and heat flow), and characteristics of the electric field in the disturbed region are presented.
The mechanism of heating of attracted particles in the disturbed zone is established, and some important features of the formation of the heat flow are shown. The results obtained are readily explained from a physical point of view, which confirms the adequacy of the model and the correct operation of the software tool. The work thereby creates and tests a basis for the future development of tools for solving more complex problems of modeling the behavior of ionized gases near charged bodies.

    The work will be useful to specialists in the fields of mathematical modeling, heat and mass transfer processes, and low-temperature plasma physics, and to postgraduate and senior students specializing in these areas.

  9. Klimenko A.B.
    Mathematical model and heuristic methods of distributed computations organizing in the Internet of Things systems
    Computer Research and Modeling, 2025, v. 17, no. 5, pp. 851-870

    Significant progress is currently being made in distributed computing, where computational tasks are solved collectively by resource-constrained devices. In practice, this scenario arises when processing data in Internet of Things systems: data is processed on edge computing devices in order to reduce system latency and network infrastructure load. However, the rapid growth and widespread adoption of IoT systems raise the need for methods that reduce the resource intensity of computations. The resource constraints of computing devices pose the following issues for the distribution of computational resources: first, the need to account for the transit cost of data between the devices solving various tasks; second, the need to consider the resource cost of the resource distribution process itself, which is particularly relevant for groups of autonomous devices such as drones or robots. An analysis of openly available recent publications showed no proposed models or methods for distributing computational resources that take all these factors into account simultaneously, which makes the creation of a new mathematical model for organizing distributed computing in IoT systems, together with methods for solving it, relevant. This article proposes such a mathematical model along with heuristic optimization methods, providing an integrated approach to implementing distributed computing in IoT systems. A scenario is considered in which a leader device within a group makes decisions concerning the allocation of computational resources, including its own, for distributed task resolution involving information exchanges.
It is also assumed that there is no prior knowledge of which device will assume the role of leader or of the migration paths of computational tasks across devices. Experimental results have shown the effectiveness of the proposed models and heuristics: up to a 52% reduction in the resource cost of solving computational problems while accounting for data transit costs; savings of up to 73% of resources through supplementary criteria that optimize task distribution by minimizing fragment migrations and distances; and a reduction of up to a factor of 28 in the resource cost of solving the resource distribution problem itself, with a reduction in distribution quality of up to 10%.
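As a toy illustration of the kind of trade-off such a model captures, one can assign task fragments to devices while charging both compute and transit costs. This is an invented greedy heuristic with invented numbers, not the paper's method:

```python
# Cost tables (all values invented for the example).
compute_cost = {                      # compute_cost[device][task]
    "dev_a": {"t1": 4, "t2": 6, "t3": 5},
    "dev_b": {"t1": 5, "t2": 3, "t3": 7},
}
transit_cost = {("dev_a", "dev_b"): 2, ("dev_b", "dev_a"): 2,
                ("dev_a", "dev_a"): 0, ("dev_b", "dev_b"): 0}
chain = ["t1", "t2", "t3"]            # fragments exchange data in order

def greedy_assign(chain):
    """Place each fragment on the device minimizing compute cost plus the
    transit cost of receiving data from the previous fragment's device."""
    placement, prev, total = {}, None, 0
    for task in chain:
        best_dev, best_cost = None, None
        for dev in compute_cost:
            cost = compute_cost[dev][task]
            if prev is not None:
                cost += transit_cost[(prev, dev)]
            if best_cost is None or cost < best_cost:
                best_dev, best_cost = dev, cost
        placement[task] = best_dev
        total += best_cost
        prev = best_dev
    return placement, total

placement, total = greedy_assign(chain)
print(placement, total)
```

Note how `t2` migrates to `dev_b` only because its compute saving (3 vs 6) outweighs the transit cost of 2; ignoring transit costs would always pick the cheapest device per task and could lose that trade-off.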

  10. Petrov M.O., Ryndin E.A., Andreeva N.V.
    Neuromorphic processor with hardware learning based on a convolutional neural network for audio spectrogram analysis
    Computer Research and Modeling, 2026, v. 18, no. 1, pp. 81-99

    This paper proposes an architectural solution for organizing a convolutional neural network (CNN) oriented towards hardware implementation on edge devices with limited resources. To this end, an approach to compressing spectrograms to a given size (28 × 28) is proposed, using discretization, mono conversion, the windowed Fourier transform, and two-dimensional interpolation. A balanced convolution procedure is developed based on compact convolutional filters whose size provides the balance between computational complexity and accuracy required for edge devices. An algorithm is proposed that performs the convolution operations and the calculation of the error function gradient in the convolutional layer in a single cycle, increasing performance in both the inference and training modes of the CNN. The trade-off between network trainability and resistance to overfitting is optimized by applying Dropout regularization with a dropout coefficient of 0.5 in the fully connected layer.

    The effectiveness of the proposed solution was demonstrated on the task of recognizing audio spectrograms of car and airplane engine sounds. The CNN was trained on a balanced dataset of 7160 audio recordings. The trained network showed high recognition accuracy (95%), low loss values (< 0.2), and balanced precision, recall, and F-measure values, confirming the effectiveness of the developed CNN model.
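The final compression step described above (two-dimensional interpolation to a fixed 28 × 28 input) can be sketched with a minimal bilinear resize. The input size and values below are invented, and the preceding pipeline stages (mono conversion, windowed Fourier transform) are omitted:

```python
def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D list of floats to out_h x out_w by bilinear interpolation."""
    in_h, in_w = len(img), len(img[0])
    out = [[0.0]*out_w for _ in range(out_h)]
    for i in range(out_h):
        y = i * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0 = min(int(y), in_h - 2)
        fy = y - y0
        for j in range(out_w):
            x = j * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0 = min(int(x), in_w - 2)
            fx = x - x0
            # weighted average of the four surrounding samples
            out[i][j] = (img[y0][x0]*(1-fy)*(1-fx) + img[y0][x0+1]*(1-fy)*fx
                         + img[y0+1][x0]*fy*(1-fx) + img[y0+1][x0+1]*fy*fx)
    return out

# toy "spectrogram": 64 frequency bins x 100 time frames
spec = [[(r + c) % 7 for c in range(100)] for r in range(64)]
small = bilinear_resize(spec, 28, 28)
print(len(small), len(small[0]))   # -> 28 28
```

A fixed input size is what lets the convolutional layers be sized once for the hardware, regardless of the original recording length.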


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

