Search results for 'methods':
Articles found: 617
  1. Reshitko M.A., Ougolnitsky G.A., Usov A.B.
    Numerical method for finding Nash and Stackelberg equilibria in river water quality control models
    Computer Research and Modeling, 2020, v. 12, no. 3, pp. 653-667

    In this paper we consider a mathematical model of water quality control. We study a system with a two-level hierarchy: one environmental organization (supervisor) at the top level and a few industrial enterprises (agents) at the lower level. The main goal of the supervisor is to keep the water pollution level below a certain value, while the enterprises pollute water as a side effect of the manufacturing process. The supervisor achieves its goal by charging the enterprises a penalty. On the other hand, the enterprises choose how much to purify their wastewater to maximize their income. The fee increases the budget of the supervisor. Moreover, effluent fees are charged for the quantity and/or quality of the discharged pollution. Unfortunately, in practice such charges are ineffective because the tax is too small. The article solves the problem of determining the optimal size of the charge for pollution discharge, which allows maintaining the quality of river water within the required range.

    We describe the goals of the system members with target functionals and describe the water pollution level and the state of the enterprises as a system of ordinary differential equations. We consider the problem from both the supervisor's and the enterprises' sides. From the agents' point of view, a normal-form game arises in which we search for a Nash equilibrium, while for the supervisor we search for a Stackelberg equilibrium. We propose numerical algorithms for finding both the Nash and the Stackelberg equilibrium. To construct the Nash equilibrium, we solve an optimal control problem using Pontryagin's maximum principle: we construct the Hamiltonian and solve the corresponding system of differential equations with the shooting method and the finite difference method. Numerical calculations show that a low penalty for the enterprises results in an increasing pollution level, while a relatively high penalty can result in the enterprises' bankruptcy. This leads to the problem of choosing the optimal penalty, which requires considering the problem from the supervisor's point of view. In that case we use the method of qualitatively representative scenarios for the supervisor and Pontryagin's maximum principle for the agents to find the optimal control for the system. Finally, we compute the system consistency ratio and test the algorithms on different data. The results show that hierarchical control is required to provide system stability.
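
    The iterative structure of such an equilibrium search can be illustrated compactly. Below is a minimal sketch of best-response iteration for a Nash equilibrium in a static pollution game; the payoff structure (revenue, quadratic purification cost, penalty on residual discharge) is a hypothetical stand-in for the paper's dynamic model, not the authors' algorithm.

```python
# Best-response iteration for a Nash equilibrium in a toy static
# pollution game. Payoffs, costs, and the penalty are illustrative
# assumptions, not the model from the paper.
import numpy as np
from scipy.optimize import minimize_scalar

def payoff(u_i, u_others, penalty, revenue=1.0, cost=0.6):
    # income - purification cost - fee on residual discharge,
    # coupled to the other agents through total pollution
    residual = 1.0 - u_i
    total_pollution = residual + np.sum(1.0 - u_others)
    return revenue - cost * u_i**2 - penalty * residual * total_pollution

def best_response_nash(n_agents=3, penalty=0.5, tol=1e-8, max_iter=200):
    u = np.full(n_agents, 0.5)          # initial purification levels in [0, 1]
    for _ in range(max_iter):
        u_new = u.copy()
        for i in range(n_agents):
            others = np.delete(u_new, i)
            res = minimize_scalar(lambda x: -payoff(x, others, penalty),
                                  bounds=(0.0, 1.0), method="bounded")
            u_new[i] = res.x
        if np.max(np.abs(u_new - u)) < tol:
            break
        u = u_new
    return u

print(best_response_nash())
```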

  2. Karpaev A.A., Aliev R.R.
    Application of simplified implicit Euler method for electrophysiological models
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 845-864

    A simplified implicit Euler method was analyzed as an alternative to the explicit Euler method, which is commonly used in numerical modeling in electrophysiology. The majority of electrophysiological models are quite stiff, since the dynamics they describe include a wide spectrum of time scales: a fast depolarization, which lasts milliseconds, precedes a considerably slower repolarization, both being phases of the action potential observed in excitable cells. In this work we estimate stiffness by a formula that does not require calculating the eigenvalues of the Jacobian matrix of the studied ODEs. The efficiency of the numerical methods was compared in the case of typical representatives of detailed and conceptual models of excitable cells: the Hodgkin–Huxley model of a neuron and the Aliev–Panfilov model of a cardiomyocyte. The comparison was carried out using norms widely used in biomedical applications. The impact of the stiffness ratio on the speedup of the simplified implicit method was studied: a real gain in speed was obtained for the Hodgkin–Huxley model. The benefits of using simplified rather than high-order methods in practical simulations of electrophysiological models are discussed, along with the stability issues of one of the methods. We calculated higher-order derivatives of the solutions of the Hodgkin–Huxley model for various stiffness ratios; their maximum absolute values turned out to be quite large. These values enter the formula for a numerical method's approximation constant and hence cancel the effect of the other term (a small factor that depends on the order of approximation), which leads to a large global error. We carried out a qualitative stability analysis of the explicit Euler method and estimated the influence of the model's parameters on the border of the region of absolute stability; the latter is used when setting the timestep for simulations a priori.
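
    The step at the core of the comparison can be sketched in a few lines: the simplified implicit Euler method performs one linearized Newton iteration per step, solving (I - h J) dy = h f(y). The right-hand side below is a common parameterization of the Aliev–Panfilov model; treat the parameter values as illustrative assumptions.

```python
# Simplified (linearized) implicit Euler: one Newton iteration per step.
# The RHS is a commonly cited Aliev-Panfilov parameterization (assumed here).
import numpy as np

K, A, EPS0, MU1, MU2 = 8.0, 0.15, 0.002, 0.2, 0.3

def f(y):
    u, v = y
    du = -K * u * (u - A) * (u - 1.0) - u * v
    dv = (EPS0 + MU1 * v / (u + MU2)) * (-v - K * u * (u - A - 1.0))
    return np.array([du, dv])

def jacobian_fd(y, eps=1e-7):
    # forward-difference Jacobian, adequate for a sketch
    n = y.size
    J = np.empty((n, n))
    f0 = f(y)
    for j in range(n):
        yp = y.copy(); yp[j] += eps
        J[:, j] = (f(yp) - f0) / eps
    return J

def simplified_implicit_euler(y0, h, n_steps):
    y = np.array(y0, dtype=float)
    I = np.eye(y.size)
    for _ in range(n_steps):
        J = jacobian_fd(y)
        dy = np.linalg.solve(I - h * J, h * f(y))  # one linearized step
        y = y + dy
    return y

print(simplified_implicit_euler([0.2, 0.05], h=0.1, n_steps=100))
```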

  3. Volokhova A.V., Zemlyanaya E.V., Kachalov V.V., Rikhvitskiy V.S.
    Simulation of the gas condensate reservoir depletion
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1081-1095

    One of the problems in developing gas condensate fields lies in the fact that the condensed hydrocarbons in the gas-bearing layer can get stuck in the pores of the formation and hence cannot be extracted. In this regard, research is underway to increase the recoverability of hydrocarbons in such fields. This research includes a wide range of studies on the mathematical simulation of the passage of gas condensate mixtures through a porous medium under various conditions.

    In the present work, within the classical approach based on the Darcy law and the law of continuity of flows, we formulate an initial-boundary value problem for a system of nonlinear differential equations that describes the depletion of a multicomponent gas condensate mixture in a porous reservoir. A computational scheme is developed on the basis of the finite-difference approximation and the fourth-order Runge–Kutta method. The scheme can be used for simulations both in the spatially one-dimensional case, corresponding to the conditions of the laboratory experiment, and in the two-dimensional case, when it comes to modeling a flat gas-bearing formation with circular symmetry.
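
    For reference, a generic fourth-order Runge–Kutta step of the kind the computational scheme relies on looks as follows; here rhs stands in for the discretized right-hand side of the reservoir equations (a toy decay model in the usage example).

```python
# Classic fourth-order Runge-Kutta step; rhs is a placeholder for the
# spatially discretized reservoir equations (a toy model below).
import numpy as np

def rk4_step(rhs, t, y, h):
    k1 = rhs(t, y)
    k2 = rhs(t + h / 2, y + h * k1 / 2)
    k3 = rhs(t + h / 2, y + h * k2 / 2)
    k4 = rhs(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# usage on a toy exponential-decay "depletion" proxy
rhs = lambda t, y: -0.5 * y
y = np.array([1.0])
for i in range(10):
    y = rk4_step(rhs, 0.1 * i, y, 0.1)
print(y)   # close to exp(-0.5)
```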

    The computer implementation is based on the combination of C++ and Maple tools, using the MPI parallel programming technique to speed up the calculations. The calculations were performed on the HybriLIT cluster of the Multifunctional Information and Computing Complex of the Laboratory of Information Technologies of the Joint Institute for Nuclear Research.

    Numerical results are compared with the experimental data on the pressure dependence of the output of a nine-component hydrocarbon mixture obtained at a laboratory facility (VNIIGAZ, Ukhta). The calculations were performed for two types of porous filler in the laboratory model of the formation: a terrigenous filler at 25 °C and a carbonate one at 60 °C. It is shown that the developed approach ensures agreement of the numerical results with the experimental data. By fitting the numerical results to the experimental data on the depletion of the laboratory reservoir, we obtained the values of the parameters that determine the interphase transition coefficient for the simulated system. Using the same parameters, a computer simulation of the depletion of a thin gas-bearing layer in the circular symmetry approximation was carried out.

  4. Shepelev V.D., Kostyuchenkov N.V., Shepelev S.D., Alieva A.A., Makarova I.V., Buyvol P.A., Parsin G.A.
    The development of an intelligent system for recognizing the volume and weight characteristics of cargo
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 437-450

    Industrial imaging or “machine vision” is currently a key technology in many industries as it can be used to optimize various processes. The purpose of this work is to create a software and hardware complex for measuring the overall and weight characteristics of cargo based on an intelligent system using neural network identification methods, which allow one to overcome the technological limitations of similar complexes implemented with ultrasonic and infrared measuring sensors. The complex to be developed will measure cargo without restrictions on the volume and weight characteristics of the items to be tariffed and sorted within warehouse complexes. The system will include an intelligent computer program that determines the volume and weight characteristics of cargo using machine vision technology and an experimental sample of the stand for measuring the volume and weight of cargo.

    We analyzed the solutions to similar problems. We noted that the disadvantages of the studied methods are very high requirements for the location of the camera, as well as the need for manual operations when calculating the dimensions, which cannot be automated without significant modifications. In the course of the work, we investigated various methods of object recognition in images in order to filter scenes by the presence of cargo and measure its overall dimensions. We obtained satisfactory results when using cameras that combine an optical method of image capture with infrared sensors. As a result of the work, we developed a computer program that captures a continuous stream from Intel RealSense video cameras, extracts a three-dimensional object from the designated area, and calculates the overall dimensions of the object. At this stage, we analyzed computer vision techniques and developed an algorithm that implements the automatic measurement of goods using special cameras, along with software that obtains the overall dimensions of objects in automatic mode.
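
    As a rough illustration of the measurement step, the following sketch estimates overall cargo dimensions from a single top-down depth image such as one delivered by an Intel RealSense camera. The camera intrinsics, the flat-floor assumption, and the thresholds are illustrative; a production system would calibrate these and filter sensor noise.

```python
# Estimate cargo length, width, and height from a top-down depth map.
# Intrinsics (fx, fy), floor distance, and thresholds are assumptions.
import numpy as np

def cargo_dimensions(depth_m, fx, fy, floor_dist_m, min_gap_m=0.02):
    mask = depth_m < (floor_dist_m - min_gap_m)   # pixels above the floor
    if not mask.any():
        return None
    rows, cols = np.where(mask)
    z = depth_m[mask]
    height = floor_dist_m - z.min()               # tallest point of the cargo
    # convert the pixel bounding box to metric size at the object's depth
    z_obj = np.median(z)
    length = (cols.max() - cols.min()) * z_obj / fx
    width = (rows.max() - rows.min()) * z_obj / fy
    return length, width, height
```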

    Upon completion of the work, this development can be used as a ready-made solution for transport companies, logistics centers, warehouses of large industrial and commercial enterprises.

  5. Kudrov A.I., Sheremet M.A.
    Numerical simulation of corium cooling driven by natural convection in case of in-vessel retention and time-dependent heat generation
    Computer Research and Modeling, 2021, v. 13, no. 4, pp. 807-822

    The present study considers the numerical simulation of corium cooling driven by natural convection within a horizontal hemicylindrical cavity whose boundaries are assumed isothermal. Corium is a melt of the ceramic fuel of a nuclear reactor and the oxides of structural materials.

    Corium cooling is a process that occurs during a severe accident involving core melt. According to the in-vessel retention concept, the accident may be contained and localized if the corium is retained within the vessel, which is possible only if the vessel is cooled externally. This concept has a clear advantage over the melt trap: it can be implemented at already operating nuclear power plants. This is why proper numerical analysis of corium cooling has become a relevant area of study.

    In this research, we assume the corium is contained within a horizontal semitube and initially has the temperature of the walls. Despite the reactor shutdown, the corium still generates heat owing to radioactive decay, and the amount of heat released decreases with time according to the Way–Wigner formula. The system of equations in the Boussinesq approximation, including the momentum, continuity and energy equations, describes natural convection within the cavity. Convective flows are taken to be laminar and two-dimensional.
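
    A minimal sketch of the decay-heat term, assuming the commonly cited Way–Wigner form P(t)/P0 = 0.0622 [t^(-0.2) - (t + t_op)^(-0.2)] with times in seconds; the exact form and constants used in the paper may differ.

```python
# Decay-heat fraction after shutdown by the Way-Wigner correlation
# (commonly cited form; treat the constants as an assumption here).
def way_wigner(t, t_op):
    """Fraction of pre-shutdown power t seconds after shutdown,
    for a reactor operated for t_op seconds."""
    return 0.0622 * (t ** -0.2 - (t + t_op) ** -0.2)

print(way_wigner(3600.0, 3.15e7))   # one hour after a year of operation
```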

    The boundary value problem of mathematical physics is formulated using the non-dimensional non-primitive variables «stream function – vorticity». The obtained differential equations are solved numerically using the finite difference method and the locally one-dimensional Samarskii scheme for equations of parabolic type.
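
    One ingredient of this formulation is the Poisson equation for the stream function, grad^2 psi = -omega. A minimal finite-difference Jacobi solve on a uniform grid with psi = 0 on the boundary is sketched below; the grid, tolerance, and boundary treatment are illustrative assumptions, and the paper's locally one-dimensional scheme for the parabolic equations is not shown.

```python
# Jacobi iteration for the Poisson equation grad^2 psi = -omega on a
# uniform grid with homogeneous Dirichlet boundaries (illustrative).
import numpy as np

def solve_stream_function(omega, h, tol=1e-6, max_iter=20000):
    psi = np.zeros_like(omega)
    for _ in range(max_iter):
        psi_new = psi.copy()
        psi_new[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1] +
                                      psi[1:-1, 2:] + psi[1:-1, :-2] +
                                      h * h * omega[1:-1, 1:-1])
        if np.max(np.abs(psi_new - psi)) < tol:
            return psi_new
        psi = psi_new
    return psi

omega = np.ones((41, 41))               # toy vorticity field
psi = solve_stream_function(omega, h=1.0 / 40)
```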

    As a result of the present research, we have obtained the time behavior of the mean Nusselt number at the top and bottom walls for Rayleigh numbers ranging from $10^3$ to $10^6$. These dependences have been analyzed for various dimensionless operation periods before the accident. The investigations have been performed using streamlines and isotherms as well as time dependences of the convective flow and heat transfer rates.

  6. Safiullina L.F., Gubaydullin I.M.
    Analysis of the identifiability of the mathematical model of propane pyrolysis
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1045-1057

    The article presents numerical modeling and a study of the kinetic model of propane pyrolysis. Studying the reaction kinetics is a necessary stage in modeling the dynamics of the gas flow in the reactor.

    The kinetic model of propane pyrolysis is a nonlinear system of first-order ordinary differential equations whose parameters are the reaction rate constants. Mathematical modeling of the processes is based on the mass conservation law. To solve the initial (forward) problem, implicit methods for stiff ordinary differential equation systems are used. The model contains 60 input kinetic parameters and 17 output parameters corresponding to the reaction substances, of which only 9 are observable. In the process of solving the parameter estimation (inverse) problem, the question arises of the non-uniqueness of the set of parameters that satisfy the experimental data. Therefore, before solving the inverse problem, the possibility of determining the model parameters is analyzed (identifiability analysis).
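
    As an illustration of the forward problem, the sketch below integrates a stiff toy kinetic system with an implicit method (SciPy's Radau); the two-reaction mechanism with widely separated rate constants stands in for the paper's 60-parameter propane scheme.

```python
# Implicit integration of a stiff toy kinetic system A -> B -> C;
# the rate constants differ by four orders of magnitude (stiffness).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, c, k1, k2):
    cA, cB, cC = c
    return [-k1 * cA, k1 * cA - k2 * cB, k2 * cB]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0],
                method="Radau", args=(1e4, 1.0), rtol=1e-8)
print(sol.y[:, -1])   # final concentrations
```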

    To analyze identifiability, we use the orthogonal method, which has proven itself well for analyzing models with a large number of parameters. The algorithm is based on the analysis of the sensitivity matrix by methods of differential and linear algebra, which shows the degree to which the unknown model parameters depend on the given measurements. The sensitivity and identifiability analysis showed that the model parameters are stably determined from the given set of experimental data. The article presents a list of the model parameters ordered from most to least identifiable. Taking this identifiability analysis into account, restrictions were introduced on the search for the less identifiable parameters when solving the inverse problem.
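
    The orthogonal ranking idea can be sketched as follows: pick the sensitivity-matrix column with the largest norm, project the remaining columns onto its orthogonal complement, and repeat. The sensitivity matrix here is random toy data; in the paper it would contain the derivatives of the observable concentrations with respect to the 60 kinetic parameters.

```python
# Orthogonal ranking of parameters by sensitivity-matrix columns
# (Gram-Schmidt-style projection; toy data for illustration).
import numpy as np

def rank_parameters(S):
    S = np.array(S, dtype=float)
    remaining = list(range(S.shape[1]))
    order = []
    while remaining:
        norms = {j: np.linalg.norm(S[:, j]) for j in remaining}
        best = max(norms, key=norms.get)          # most identifiable next
        order.append(best)
        q = S[:, best] / (norms[best] + 1e-15)
        for j in remaining:
            S[:, j] -= q * (q @ S[:, j])          # remove explained part
        remaining.remove(best)
    return order

S = np.random.rand(50, 8)      # 50 observations x 8 parameters (toy)
print(rank_parameters(S))      # indices from most to least identifiable
```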

    The inverse problem of parameter estimation was solved using a genetic algorithm. The article presents the optimal values found for the kinetic parameters. A comparison of the experimental and calculated dependences of the concentrations of propane and the main products and by-products of the reaction on temperature is presented for different flow rates of the mixture. The conclusion about the adequacy of the constructed mathematical model is drawn from the correspondence of the obtained results to physicochemical laws and to the experimental data.

  7. Ostroukhov P.A.
    Tensor methods inside mixed oracle for min-min problems
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 377-398

    In this article we consider problems of the min-min type, i.e., minimization over two groups of variables. In a way this is similar to the classic min-max saddle point problem, although saddle point problems are usually more difficult. Min-min problems may occur if some groups of variables in convex optimization have different dimensions or different domains. Such a problem structure gives us the ability to split the main task into subproblems and allows us to tackle it with mixed oracles. However, existing articles on this topic cover only zeroth- and first-order oracles; in our work we consider high-order tensor methods to solve the inner problem and the fast gradient method to solve the outer problem.

    We assume that the outer problem is constrained to some convex compact set, and for the inner problem we consider both the unconstrained case and the case of being constrained to some convex compact set. By definition, tensor methods use high-order derivatives, so the time per single iteration of the method depends heavily on the dimensionality of the problem it solves. Therefore, we assume that the dimension of the inner problem variable is not greater than 1000. Additionally, we need some specific assumptions to be able to use mixed oracles. Firstly, we assume that the objective is convex in both groups of variables and that its gradient in both variables is Lipschitz continuous. Secondly, we assume that the inner problem is strongly convex and that its gradient is Lipschitz continuous. Also, since we are going to use tensor methods for the inner problem, we need its $p$-th order derivative ($p > 1$) to be Lipschitz continuous. Finally, we assume strong convexity of the outer problem to be able to use the fast gradient method for strongly convex functions.
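
    The outer-loop component can be sketched as follows: the fast (Nesterov) gradient method for a smooth, strongly convex objective, with Euclidean projection onto a ball as a stand-in for the convex compact constraint set. In the mixed-oracle scheme, grad would itself be evaluated by solving the inner problem with a tensor method; here it is an explicit toy gradient.

```python
# Projected fast gradient method for a strongly convex objective
# (illustrative sketch; the ball constraint and constants are assumed).
import numpy as np

def project_ball(x, radius=1.0):
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def fast_gradient(grad, x0, L, mu, n_iter=100):
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
    x = y = np.array(x0, dtype=float)
    for _ in range(n_iter):
        x_next = project_ball(y - grad(y) / L)   # gradient step + projection
        y = x_next + beta * (x_next - x)         # momentum extrapolation
        x = x_next
    return x

# toy quadratic objective with L = 10, mu = 1
A = np.diag([10.0, 1.0]); b = np.array([1.0, -2.0])
print(fast_gradient(lambda x: A @ x - b, [5.0, 5.0], L=10.0, mu=1.0))
```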

    We need to emphasize that we use the superfast tensor method to tackle the inner subproblem in the unconstrained case, and when we solve the inner problem on a compact set, we use the accelerated high-order composite proximal method.

    Additionally, at the end of the article we compare the theoretical complexity of the obtained methods with that of the regular gradient method, which solves the mentioned problem as a regular convex optimization problem and does not take its structure into account (Remarks 1 and 2).

  8. Suzdaltsev V.A., Suzdaltsev I.V., Tarhavova E.G.
    Fuzzy knowledge extraction in the development of expert predictive diagnostic systems
    Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1395-1408

    Expert systems imitate the professional experience and thinking process of a specialist to solve problems in various subject areas. An example of a problem that it is expedient to solve with the help of an expert system is the problem of forming a diagnosis, which arises in technology, medicine, and other fields. When solving the diagnostic problem, it is necessary to anticipate the occurrence of critical or emergency situations in the future, i.e., situations that require the timely intervention of specialists to prevent critical aftermath. Fuzzy set theory provides one of the approaches to solving ill-structured problems, to which diagnosis-making problems belong. The theory provides means for forming linguistic variables, which are helpful in describing the modeled process. Linguistic variables are elements of fuzzy logical rules that simulate the reasoning of professionals in the subject area. To develop fuzzy rules, it is necessary to resort to a survey of experts. Knowledge engineers use the experts' opinions to evaluate the correspondence between a typical current situation and the risk of an emergency in the future. The result of knowledge extraction is a description of linguistic variables that includes a combination of signs. Experts are involved in the survey to create descriptions of linguistic variables and to present a set of simulated situations.

    When building such systems, the main problem of the survey is the laboriousness of the interaction between knowledge engineers and experts, the main reason being the multiplicity of questions the expert must answer. The paper presents the rationale for a method that allows the knowledge engineer to reduce the number of questions posed to the expert, and describes the experiments carried out to test the applicability of the proposed method. An expert system for predicting risk groups for neonatal and pregnancy pathologies built using the proposed knowledge extraction method confirms the feasibility of the approach.
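
    A minimal sketch of the fuzzy machinery such a survey feeds: a triangular membership function defines a linguistic term, and a Mamdani-style rule takes the minimum of the antecedent memberships. The terms, ranges, and the rule itself are illustrative assumptions, not the paper's clinical rule base.

```python
# Triangular membership functions and one Mamdani-style fuzzy rule.
# The linguistic terms and thresholds below are hypothetical.
def triangular(a, b, c):
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# linguistic terms for two hypothetical diagnostic signs
temp_high = triangular(37.0, 39.0, 41.0)
hr_high = triangular(90.0, 120.0, 150.0)

def rule_risk_high(temperature, heart_rate):
    # IF temperature is high AND heart rate is high THEN risk is high
    return min(temp_high(temperature), hr_high(heart_rate))

print(rule_risk_high(38.5, 110.0))   # degree of rule activation in [0, 1]
```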

  9. Makarov I.S., Bagantsova E.R., Iashin P.A., Kovaleva M.D., Zakharova E.M.
    Development of and research into a rigid algorithm for analyzing Twitter publications and its influence on the movements of the cryptocurrency market
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 157-170

    Social media is a crucial indicator of the position of assets in the financial market. The paper describes a rigid solution for the classification problem of determining the influence of social media activity on financial market movements. Reputable crypto trader influencers are selected, and packages of Twitter posts are used as data. The methods of preprocessing the text, which is characterized by heavy use of slang words and abbreviations, consist in lemmatization with Stanza and the use of regular expressions. A word is treated as an element of the vector representing a data unit when solving the binary classification problem. We search for the best markup parameters for processing Binance candles. Feature selection, which is necessary for a precise description of the text data and the subsequent process of establishing a dependence, is carried out with machine learning and statistical analysis methods. First, feature selection based on an information criterion is used; this approach is implemented in a random forest model and is relevant for selecting features for splitting nodes in a decision tree. The second is based on the rigid compilation of a binary vector by a rough check of the presence or absence of each word in the package and on counting the sum of the elements of this vector; a decision is then made depending on whether this sum exceeds a threshold value determined in advance by analyzing the frequency distribution of mentions of the word. The algorithm used to solve the problem was named the benchmark and analyzed as a tool. Similar algorithms are often used in automated trading strategies. The study also describes observations of the influence of frequently occurring words used as a basis of dimension 2 and 3 in vectorization.
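
    The second, "rigid" approach reduces to a few lines: build a binary presence vector over a keyword dictionary for each package of tweets, sum its elements, and fire a signal when the sum exceeds a preset threshold. The keywords and the threshold below are illustrative assumptions.

```python
# Rigid binary-vector benchmark: keyword presence check plus a threshold.
# KEYWORDS and THRESHOLD are hypothetical stand-ins for the paper's values.
import re

KEYWORDS = ["bullish", "moon", "pump", "breakout"]   # hypothetical dictionary
THRESHOLD = 2              # would be tuned from the word-frequency analysis

def signal(tweet_package):
    text = " ".join(tweet_package).lower()
    words = set(re.findall(r"[a-z']+", text))
    presence = [1 if w in words else 0 for w in KEYWORDS]  # binary vector
    return sum(presence) >= THRESHOLD

print(signal(["BTC looking bullish, expecting a breakout", "to the moon"]))
```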

  10. Stonyakin F.S., Ablaev S.S., Baran I.V., Alkousa M.S.
    Subgradient methods for weakly convex and relatively weakly convex problems with a sharp minimum
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 393-412

    The work is devoted to the study of subgradient methods with different variations of the Polyak stepsize for minimizing functions from the class of weakly convex and relatively weakly convex functions that have the corresponding analogue of a sharp minimum. It turns out that, under certain assumptions about the starting point, such an approach can make it possible to justify the convergence of the subgradient method at the rate of a geometric progression. For the subgradient method with the Polyak stepsize, a refined estimate of the rate of convergence is proved for minimization problems for weakly convex functions with a sharp minimum. The feature of this estimate is that it additionally takes into account the decrease of the distance from the current point of the method to the set of solutions as the number of iterations increases. The results of numerical experiments for the phase reconstruction problem (which is weakly convex and has a sharp minimum) are presented, demonstrating the effectiveness of the proposed approach to estimating the rate of convergence compared to the known one. Next, we propose a variation of the subgradient method with switching between productive and non-productive steps for weakly convex problems with inequality constraints and obtain the corresponding analogue of the result on convergence at the rate of a geometric progression. For the subgradient method with the corresponding variation of the Polyak stepsize on the class of relatively Lipschitz and relatively weakly convex functions with a relative analogue of a sharp minimum, we obtained conditions that guarantee the convergence of such a subgradient method at the rate of a geometric progression. Finally, a theoretical result is obtained that describes the influence of the error in the information about the (sub)gradient and the objective function available to the subgradient method on the estimate of the quality of the obtained approximate solution. It is proved that for a sufficiently small error $\delta > 0$, one can guarantee accuracy of the solution comparable to $\delta$.
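
    The basic method under study admits a compact sketch: the subgradient step with the Polyak stepsize alpha_k = (f(x_k) - f*) / ||g_k||^2, shown here on a toy sharp-minimum objective f(x) = ||x||_1 with f* = 0, which stands in for the weakly convex problems studied in the paper.

```python
# Subgradient method with the Polyak stepsize on f(x) = ||x||_1
# (a toy objective with a sharp minimum at the origin, f* = 0).
import numpy as np

def subgradient_polyak(f, subgrad, f_star, x0, n_iter=50):
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        g = subgrad(x)
        gap = f(x) - f_star
        if gap <= 0 or not g.any():   # already optimal (or zero subgradient)
            break
        x = x - (gap / (g @ g)) * g   # Polyak step
    return x

f = lambda x: np.abs(x).sum()
subgrad = lambda x: np.sign(x)
print(subgradient_polyak(f, subgrad, 0.0, [3.0, -2.0]))
```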
