Search results for 'optimality':
Articles found: 222
  1. Radjuk A.G., Titlianov A.E., Skripalenko M.M.
    Computer simulation of temperature field of blast furnace’s air tuyere
    Computer Research and Modeling, 2017, v. 9, no. 1, pp. 117-125

    The study of heating equipment operation is a topical issue because it allows determining optimal regimes that achieve the highest efficiency. Computer simulation is very helpful here for predicting how different heating modes influence the effectiveness of the heating process and the wear of the heating equipment. Computer simulation provides results whose accuracy is confirmed by many studies, and it requires less cost and time than real experiments. In the present research, computer simulation of the heating of a blast furnace's air tuyere was carried out with the help of FEM software. Background studies revealed the possibility of simulating it as a flat, axisymmetric problem, and the DEFORM-2D software was used for the simulation. The geometry necessary for the simulation was designed with the help of SolidWorks and saved in .dxf format. It was then exported to the DEFORM-2D pre-processor and positioned, and the initial and boundary conditions were set up. Several operating regimes were analyzed. To demonstrate the influence of each of the modes, and for better visualization, the point tracking option of the DEFORM-2D post-processor was applied. The influence on the air tuyere's temperature field of a thermal insulation box plugged into the blow channel, with and without an air gap, and of a thermal coating was investigated. Simulation data demonstrated a significant effect of the thermal insulation box on the air tuyere's temperature field. The designed model also allowed simulating the tuyere's burnout as a result of interaction with liquid iron. The conducted research demonstrated the effectiveness of DEFORM-2D for simulating heat transfer and heating processes. DEFORM-2D will be used in further studies dedicated to more complex processes connected with the temperature field of a blast furnace's air tuyere.
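As a loose illustration of the kind of heat-transfer computation described above (the actual study uses the DEFORM-2D FEM package; this is only a toy 1D explicit finite-difference sketch, and all parameter values are hypothetical):

```python
import numpy as np

def heat_1d(n=50, steps=500, alpha=1e-4, dx=1e-3, dt=1e-3,
            t_left=1200.0, t_right=40.0, t_init=40.0):
    """Explicit finite-difference solution of 1D heat conduction:
    a toy analogue of heating a tuyere wall from a hot boundary."""
    r = alpha * dt / dx**2              # stability requires r <= 0.5
    assert r <= 0.5, "explicit scheme unstable for this step choice"
    T = np.full(n, t_init)
    T[0], T[-1] = t_left, t_right       # fixed-temperature boundaries
    for _ in range(steps):
        # interior update: convex combination of neighbours (monotone)
        T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T
```

A real tuyere simulation would of course be 2D axisymmetric with temperature-dependent material properties; the sketch only shows the time-stepping structure.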

  2. Silaeva V.A., Silaeva M.V., Silaev A.M.
    Estimation of models parameters for time series with Markov switching regimes
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 903-918

    The paper considers the problem of estimating the parameters of time series described by regression models with Markov switching between two regimes at random instants of time, with independent Gaussian noise. For the solution, we propose a variant of the EM algorithm based on an iterative procedure in which the regression parameters are estimated for a given regime-switching sequence, and the switching sequence is estimated for the given parameters of the regression models. In contrast to the well-known methods of estimating regression parameters in models with Markov switching, which are based on calculating a posteriori probabilities of the discrete states of the switching sequence, in this paper we compute estimates of the switching sequence that are optimal by the criterion of maximum a posteriori probability. As a result, the proposed algorithm turns out to be simpler and requires fewer calculations. Computer modeling makes it possible to reveal the factors influencing the accuracy of estimation. Such factors include the number of observations, the number of unknown regression parameters, the degree of their difference in different modes of operation, and the signal-to-noise ratio, which is associated with the coefficient of determination in regression models. The proposed algorithm is applied to the problem of estimating parameters in regression models for the daily rate of return of the RTS index, depending on the returns of the S&P 500 index and Gazprom shares for the period from 2013 to 2018. The parameter estimates found using the proposed algorithm are compared with the estimates produced by the EViews econometric package and with ordinary least squares estimates that do not take regime switching into account. Taking regime switching into account gives a more accurate picture of the statistical dependence structure of the variables under study.
In switching models, an increase in the signal-to-noise ratio reduces the differences between the estimates produced by the proposed algorithm and by the EViews program.
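The alternating procedure described in this abstract can be caricatured in a few lines: re-fit per-regime regressions for the current labelling, then re-assign each observation to the better-fitting regime. This is a simplified stand-in, not the authors' EM algorithm (it ignores the Markov prior on the switching sequence; all names are hypothetical):

```python
import numpy as np

def fit_switching_regression(x, y, n_iter=20, seed=0):
    """Toy alternating scheme for a two-regime switching regression:
    (1) estimate each regime's OLS intercept/slope for the current
    labelling, (2) relabel each point by the regime with the smaller
    residual (a crude stand-in for the MAP switching-sequence step)."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, size=len(x))
    X = np.column_stack([np.ones_like(x), x])
    betas = []
    for _ in range(n_iter):
        betas = []
        for k in (0, 1):
            m = labels == k
            if m.sum() < 2:              # degenerate regime: stop early
                return labels, betas
            betas.append(np.linalg.lstsq(X[m], y[m], rcond=None)[0])
        # absolute residual of every point under each regime's model
        resid = np.abs(np.stack([y - X @ b for b in betas]))
        labels = resid.argmin(axis=0)
    return labels, betas
```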

  3. Zabotin, V.I., Chernyshevskij P.A.
    Extension of Strongin’s Global Optimization Algorithm to a Function Continuous on a Compact Interval
    Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1111-1119

    The Lipschitz continuity property has long been used to solve the global optimization problem and continues to be used. Here we can mention the work of Piyavskii, Yevtushenko, Strongin, Shubert, Sergeyev, Kvasov and others. Most papers assume a priori knowledge of the Lipschitz constant, but obtaining this constant is a separate problem. Moreover, one must prove that the objective function is really Lipschitz, which is a complicated problem too. For the case where Lipschitz continuity is established, Strongin proposed an algorithm for the global optimization of a function satisfying the Lipschitz condition on a compact interval without any a priori knowledge of the Lipschitz estimate. The algorithm not only finds a global extremum but determines the Lipschitz estimate as well. It is known that every function satisfying the Lipschitz condition on a compact convex set is uniformly continuous, but the converse is not always true. However, there exist models (Arutyunova, Dulliev, Zabotin) whose study requires minimizing a continuous but definitely non-Lipschitz function. One of the algorithms for solving such a problem was proposed by R. J. Vanderbei. In his work he introduced a generalization of the Lipschitz property, named $\varepsilon$-Lipschitz, and proved that a function defined on a compact convex set is uniformly continuous if and only if it satisfies the $\varepsilon$-Lipschitz condition. This property allowed him to extend Piyavskii's method. However, Vanderbei assumed that for a given value of $\varepsilon$ it is possible to obtain an associated Lipschitz $\varepsilon$-constant, which is a very difficult problem. Thus, there is a need to construct, for a function continuous on a compact convex domain, a global optimization algorithm that works in some sense like Strongin's algorithm, i.e., without any a priori knowledge of the Lipschitz $\varepsilon$-constant.
In this paper we propose an extension of Strongin's global optimization algorithm to a function continuous on a compact interval using the $\varepsilon$-Lipschitz concept, prove its convergence, and solve some numerical examples using software that implements the developed method.
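A minimal sketch of the Strongin-style idea the paper builds on: global search on an interval with the Lipschitz estimate refined adaptively from the trial points rather than given a priori. This illustrates only the classical Lipschitz case, not the paper's $\varepsilon$-Lipschitz extension; the function names and the reliability parameter r are hypothetical:

```python
def strongin_minimize(f, a, b, n_iter=100, r=2.0):
    """Strongin-style global search on [a, b] with an adaptive
    Lipschitz estimate m = r * (largest observed slope), r > 1."""
    pts = sorted([(a, f(a)), (b, f(b))])
    for _ in range(n_iter):
        xs = [p[0] for p in pts]
        zs = [p[1] for p in pts]
        # adaptive Lipschitz estimate from all trial points
        M = max(abs(zs[i + 1] - zs[i]) / (xs[i + 1] - xs[i])
                for i in range(len(pts) - 1))
        m = r * M if M > 0 else 1.0
        # pick the interval with the largest characteristic R_i
        best_i, best_R = 0, -float('inf')
        for i in range(len(pts) - 1):
            dx, dz = xs[i + 1] - xs[i], zs[i + 1] - zs[i]
            R = m * dx + dz * dz / (m * dx) - 2 * (zs[i + 1] + zs[i])
            if R > best_R:
                best_R, best_i = R, i
        i = best_i
        # new trial point strictly inside the chosen interval
        x_new = 0.5 * (xs[i] + xs[i + 1]) - (zs[i + 1] - zs[i]) / (2 * m)
        pts.append((x_new, f(x_new)))
        pts.sort()
    return min(pts, key=lambda p: p[1])
```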

  4. Reshitko M.A., Ougolnitsky G.A., Usov A.B.
    Numerical method for finding Nash and Stackelberg equilibria in river water quality control models
    Computer Research and Modeling, 2020, v. 12, no. 3, pp. 653-667

    In this paper we consider a mathematical model of water quality control. We study a system with a two-level hierarchy: one environmental organization (supervisor) at the top level and a few industrial enterprises (agents) at the lower level. The main goal of the supervisor is to keep the water pollution level below a certain value, while enterprises pollute water as a side effect of the manufacturing process. The supervisor achieves its goal by charging a penalty for the enterprises. On the other hand, enterprises choose how much to purify their wastewater to maximize their income. The fee increases the budget of the supervisor. Moreover, effluent fees are charged for the quantity and/or quality of the discharged pollution. Unfortunately, in practice such charges are ineffective due to the insufficient tax size. The article solves the problem of determining the optimal size of the charge for pollution discharge, which allows maintaining the quality of river water within the required range.

    We describe the system members' goals with target functionals, and describe the water pollution level and the enterprises' state as a system of ordinary differential equations. We consider the problem from both the supervisor's and the enterprises' sides. From the agents' point of view a normal-form game arises, in which we search for a Nash equilibrium, while for the supervisor we search for a Stackelberg equilibrium. We propose numerical algorithms for finding both Nash and Stackelberg equilibria. When constructing the Nash equilibrium, we solve an optimal control problem using Pontryagin's maximum principle: we construct the Hamiltonian and solve the corresponding system of differential equations with the shooting method and the finite difference method. Numerical calculations show that a low penalty for enterprises results in an increasing pollution level, while a relatively high penalty can result in the enterprises' bankruptcy. This leads to the problem of choosing the optimal penalty, which requires considering the problem from the supervisor's point of view. In that case we use the method of qualitatively representative scenarios for the supervisor and Pontryagin's maximum principle for the agents to find the optimal control for the system. Finally, we compute the system consistency ratio and test the algorithms on different data. The results show that hierarchical control is required to provide system stability.
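For the agents' normal-form game, the basic Nash-seeking loop can be illustrated by best-response iteration on a finite bimatrix game. This is a toy stand-in for the paper's continuous optimal-control setting, with all names hypothetical:

```python
import numpy as np

def best_response_nash(payoff, n_iter=100):
    """Iterative best response for a two-player bimatrix game.
    payoff[k] is player k's payoff matrix (rows: player 0's strategies,
    columns: player 1's). Returns a pure-strategy Nash equilibrium
    (a pair of mutual best responses) or None if the iteration cycles."""
    a = b = 0
    for _ in range(n_iter):
        a_new = int(np.argmax(payoff[0][:, b]))      # best reply to b
        b_new = int(np.argmax(payoff[1][a_new, :]))  # best reply to a_new
        if (a_new, b_new) == (a, b):
            return a, b          # nobody wants to deviate: equilibrium
        a, b = a_new, b_new
    return None                  # no pure equilibrium found
```

On a prisoner's-dilemma-style payoff pair this converges to the mutual-defection cell, mirroring the abstract's observation that individually rational pollution choices need not be socially desirable.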

  5. Shepelev V.D., Kostyuchenkov N.V., Shepelev S.D., Alieva A.A., Makarova I.V., Buyvol P.A., Parsin G.A.
    The development of an intelligent system for recognizing the volume and weight characteristics of cargo
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 437-450

    Industrial imaging or “machine vision” is currently a key technology in many industries as it can be used to optimize various processes. The purpose of this work is to create a software and hardware complex for measuring the overall and weight characteristics of cargo based on an intelligent system using neural network identification methods that allow one to overcome the technological limitations of similar complexes implemented on ultrasonic and infrared measuring sensors. The complex to be developed will measure cargo without restrictions on the volume and weight characteristics of cargo to be tariffed and sorted within the framework of the warehouse complexes. The system will include an intelligent computer program that determines the volume and weight characteristics of cargo using the machine vision technology and an experimental sample of the stand for measuring the volume and weight of cargo.

    We analyzed the solutions to similar problems. We noted that the disadvantages of the studied methods are very high requirements for the location of the camera, as well as the need for manual operations when calculating the dimensions, which cannot be automated without significant modifications. In the course of the work, we investigated various methods of object recognition in images to carry out subject filtering by the presence of cargo and measure its overall dimensions. We obtained satisfactory results when using cameras that combine both an optical method of image capture and infrared sensors. As a result of the work, we developed a computer program allowing one to capture a continuous stream from Intel RealSense video cameras with subsequent extraction of a three-dimensional object from the designated area and to calculate the overall dimensions of the object. At this stage, we analyzed computer vision techniques; developed an algorithm to implement the task of automatic measurement of goods using special cameras and the software allowing one to obtain the overall dimensions of objects in automatic mode.

    Upon completion of the work, this development can be used as a ready-made solution for transport companies, logistics centers, warehouses of large industrial and commercial enterprises.
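The measurement step, extracting the overall dimensions of a cargo item from 3D data, can be sketched as an axis-aligned bounding box over a floor-filtered point cloud. This is a deliberately simplified stand-in for the developed system (no camera calibration, no segmentation); all names are hypothetical:

```python
import numpy as np

def cargo_dimensions(points, ground_z=0.0):
    """Estimate axis-aligned overall dimensions (length, width, height)
    of a cargo item from a 3D point cloud, dropping floor-level points."""
    pts = np.asarray(points, dtype=float)
    pts = pts[pts[:, 2] > ground_z + 1e-6]       # remove floor points
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    # report the larger horizontal extent as length, the smaller as width
    length, width = sorted(maxs[:2] - mins[:2], reverse=True)
    height = maxs[2] - ground_z                  # height above the floor
    return length, width, height
```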

  6. Safiullina L.F., Gubaydullin I.M.
    Analysis of the identifiability of the mathematical model of propane pyrolysis
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1045-1057

    The article presents the numerical modeling and study of the kinetic model of propane pyrolysis. The study of the reaction kinetics is a necessary stage in modeling the dynamics of the gas flow in the reactor.

    The kinetic model of propane pyrolysis is a nonlinear system of first-order ordinary differential equations whose parameters are the reaction rate constants. Mathematical modeling of the processes is based on the mass conservation law. To solve the initial (forward) problem, implicit methods for stiff ordinary differential equation systems are used. The model contains 60 input kinetic parameters and 17 output parameters corresponding to the reaction substances, of which only 9 are observable. In the process of solving the parameter estimation (inverse) problem, the question arises of the non-uniqueness of the set of parameters that satisfy the experimental data. Therefore, before solving the inverse problem, the possibility of determining the model parameters is analyzed (identifiability analysis).

    To analyze identifiability, we use the orthogonal method, which has proven itself well for analyzing models with a large number of parameters. The algorithm is based on the analysis of the sensitivity matrix by the methods of differential and linear algebra, which shows the degree of dependence of the unknown parameters of the models on the given measurements. The analysis of sensitivity and identifiability showed that the parameters of the model are stably determined from a given set of experimental data. The article presents a list of model parameters from most to least identifiable. Taking into account the analysis of the identifiability of the mathematical model, restrictions were introduced on the search for less identifiable parameters when solving the inverse problem.

    The inverse problem of estimating the parameters was solved using a genetic algorithm. The article presents the found optimal values of the kinetic parameters. A comparison of the experimental and calculated dependences of the concentrations of propane, main and by-products of the reaction on temperature for different flow rates of the mixture is presented. The conclusion about the adequacy of the constructed mathematical model is made on the basis of the correspondence of the results obtained to physicochemical laws and experimental data.
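The orthogonal identifiability analysis mentioned above can be sketched as greedy column selection on the sensitivity matrix: repeatedly pick the parameter whose sensitivity column has the largest residual norm, then deflate the remaining columns against it. This is a sketch of the general orthogonal method, not the authors' exact implementation:

```python
import numpy as np

def rank_identifiable(S):
    """Orthogonal-method ranking of parameters by identifiability from a
    sensitivity matrix S (rows: measurements, columns: parameters):
    select the column with the largest residual norm, remove its
    direction from all remaining columns, repeat. Early picks are the
    most identifiable parameters; near-collinear columns rank last."""
    R = np.array(S, dtype=float)
    order, remaining = [], list(range(R.shape[1]))
    while remaining:
        norms = [np.linalg.norm(R[:, j]) for j in remaining]
        k = remaining[int(np.argmax(norms))]
        order.append(k)
        q = R[:, k] / (np.linalg.norm(R[:, k]) + 1e-15)
        for j in remaining:
            R[:, j] -= q * (q @ R[:, j])   # deflate along chosen direction
        remaining.remove(k)
    return order
```

In the test below, column 2 is collinear with the dominant column 0, so despite its larger raw norm it ranks after the independent column 1.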

  7. Korepanov V.O., Chkhartishvili A.G., Shumov V.V.
    Game-theoretic and reflexive combat models
    Computer Research and Modeling, 2022, v. 14, no. 1, pp. 179-203

    Modeling combat operations is an urgent scientific and practical task aimed at providing commanders and staffs with quantitative grounds for making decisions. The authors propose a victory function for combat and military operations, based on G. Tullock's conflict function and taking into account the scale of combat (military) operations. On a sufficient volume of military statistics, the scale parameter was assessed and its values were found for the tactical, operational and strategic levels. The game-theoretic «offensive – defense» models, in which the sides solve immediate and subsequent tasks with troops formed in one or several echelons, have been investigated. At the first stage of modeling, the solution of the immediate task is found: the breakthrough (holding) of defense points; at the second stage, the solution of the subsequent task: the defeat of the enemy in the depth of the defense (counterattack and restoration of defense). For the tactical level, using the Nash equilibrium, solutions were found for the immediate task (the distribution of the sides' forces among defense points) in an antagonistic game according to three criteria: a) breakthrough of the weakest point, b) breakthrough of at least one point, and c) weighted average probability. It is shown that it is advisable for the attacking side to use the criterion of «breaking through at least one point», which, all other things being equal, ensures the maximum probability of breaking through the defense points.
At the second stage of modeling for a particular case (the sides are guided by the criterion of breaking through the weakest point when breaking through and holding defense points), the problem of distributing forces and facilities between tactical tasks (echelons) was solved according to two criteria: a) maximizing the probability of breaking through the defense point and the probability of defeating the enemy in depth defense, b) maximizing the minimum value of the named probabilities (the criterion of the guaranteed result). Awareness is an important aspect of combat operations. Several examples of reflexive games (games characterized by complex mutual awareness) and information management are considered. It is shown under what conditions information control increases the player’s payoff, and the optimal information control is found.
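The Tullock-type contest success function referred to in this abstract has, in its standard form, the shape $p = x^s/(x^s + y^s)$, where $s$ is the decisiveness (here, scale-of-combat) parameter. A minimal sketch, noting that the authors' exact parametrization may differ:

```python
def victory_probability(x, y, scale=1.0):
    """Tullock-type contest success function: probability that a side
    with resources x defeats a side with resources y. Larger `scale`
    makes the outcome more decisive for the stronger side."""
    return x**scale / (x**scale + y**scale)
```

With equal resources the probability is 1/2 for any scale; increasing the scale amplifies a given resource advantage.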

  8. Ostroukhov P.A.
    Tensor methods inside mixed oracle for min-min problems
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 377-398

    In this article we consider min-min type problems, i.e., minimization over two groups of variables. In some sense this is similar to the classic min-max saddle-point problem, although saddle-point problems are usually more difficult. Min-min problems may arise when the groups of variables in a convex optimization problem have different dimensions or different domains. Such a problem structure gives us the ability to split the main task into subproblems and allows us to tackle it with mixed oracles. However, existing articles on this topic cover only zeroth- and first-order oracles; in our work we consider high-order tensor methods for solving the inner problem and a fast gradient method for solving the outer problem.

    We assume that the outer problem is constrained to some convex compact set, while for the inner problem we consider both the unconstrained case and the case of a convex compact constraint set. By definition, tensor methods use high-order derivatives, so the time per single iteration of the method depends heavily on the dimensionality of the problem it solves. Therefore, we assume that the dimension of the inner problem variable is not greater than 1000. Additionally, we need some specific assumptions to be able to use mixed oracles. Firstly, we assume that the objective is convex in both groups of variables and that its gradient with respect to both variables is Lipschitz continuous. Secondly, we assume that the inner problem is strongly convex and that its gradient is Lipschitz continuous. Also, since we are going to use tensor methods for the inner problem, we need its $p$-th derivative to be Lipschitz continuous ($p > 1$). Finally, we assume strong convexity of the outer problem to be able to use the fast gradient method for strongly convex functions.

    We emphasize that we use the superfast tensor method to tackle the inner subproblem in the unconstrained case, and when we solve the inner problem on a compact set, we use the accelerated high-order composite proximal method.

    Additionally, at the end of the article we compare the theoretical complexity of the obtained methods with the regular gradient method, which solves the mentioned problem as a regular convex optimization problem and does not take its structure into account (Remarks 1 and 2).
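The mixed-oracle structure, an outer gradient method wrapped around an accurately solved inner problem, can be sketched in scalar form. The outer gradient uses Danskin's theorem: for $g(x) = \min_y f(x, y)$ with a unique inner minimizer, $g'(x) = \partial f/\partial x$ evaluated at $(x, y^*(x))$. This is a generic sketch, not the paper's accelerated/tensor scheme, and all names are hypothetical:

```python
def solve_min_min(grad_x, inner_argmin, x0=0.0, lr=0.1, n_iter=200):
    """Mixed-oracle sketch for min_x min_y f(x, y):
    `inner_argmin(x)` returns the exact inner minimizer y*(x) (standing
    in for the tensor/high-order inner solver), and the outer problem is
    driven by plain gradient steps on g(x) = f(x, y*(x))."""
    x = x0
    for _ in range(n_iter):
        y = inner_argmin(x)        # inner problem solved to high accuracy
        x -= lr * grad_x(x, y)     # outer gradient step via Danskin
    return x, inner_argmin(x)
```

The test uses $f(x, y) = (x-1)^2 + (y-2)^2 + 0.1xy$, for which $y^*(x) = 2 - 0.05x$ and the outer stationarity condition is $1.995x - 1.8 = 0$.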

  9. Bernadotte A., Mazurin A.D.
    Optimization of the brain command dictionary based on the statistical proximity criterion in silent speech recognition task
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 675-690

    In our research, we focus on the problem of classification for silent speech recognition to develop a brain–computer interface (BCI) based on electroencephalographic (EEG) data, which will be capable of assisting people with mental and physical disabilities and expanding human capabilities in everyday life. Our previous research has shown that the silent pronouncing of some words results in almost identical distributions of electroencephalographic signal data. Such a phenomenon has a suppressive impact on the quality of neural network model behavior. This paper proposes a data processing technique that distinguishes between statistically remote and inseparable classes in the dataset. Applying the proposed approach helps us reach the goal of maximizing the semantic load of the dictionary used in BCI.

    Furthermore, we propose the existence of a statistical predictive criterion for the accuracy of binary classification of the words in a dictionary. Such a criterion aims to estimate the lower and the upper bounds of classifiers’ behavior only by measuring quantitative statistical properties of the data (in particular, using the Kolmogorov – Smirnov method). We show that higher levels of classification accuracy can be achieved by means of applying the proposed predictive criterion, making it possible to form an optimized dictionary in terms of semantic load for the EEG-based BCIs. Furthermore, using such a dictionary as a training dataset for classification problems grants the statistical remoteness of the classes by taking into account the semantic and phonetic properties of the corresponding words and improves the classification behavior of silent speech recognition models.
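The statistical-remoteness filtering can be sketched with a two-sample Kolmogorov–Smirnov distance over per-class samples: keep only word pairs whose feature distributions are far apart. This is a plain-NumPy sketch on scalar features; the threshold value and all names are hypothetical, and the real pipeline works on multichannel EEG data:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side='right') / len(a)
    cdf_b = np.searchsorted(b, grid, side='right') / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

def select_separable_pairs(class_samples, threshold=0.5):
    """Keep only word pairs whose samples are statistically remote
    (KS distance above threshold): a sketch of the dictionary
    optimization criterion described above."""
    names = list(class_samples)
    return [(u, v) for i, u in enumerate(names) for v in names[i + 1:]
            if ks_statistic(class_samples[u], class_samples[v]) >= threshold]
```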

  10. Sofronova E.A., Diveev A.I., Kazaryan D.E., Konstantinov S.V., Daryina A.N., Seliverstov Y.A., Baskin L.A.
    Utilizing multi-source real data for traffic flow optimization in CTraf
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 147-159

    The problem of optimal control of traffic flow in an urban road network is considered. The control is carried out by varying the duration of the working phases of traffic lights at controlled intersections. A description of the developed control system is given. The control system enables the use of three types of control: open-loop, feedback and manual. In feedback control, road infrastructure detectors, video cameras, inductive loop and radar detectors are used to determine the quantitative characteristics of the current traffic flow state. The quantitative characteristics of the traffic flows are fed into a mathematical model of the traffic flow, implemented in the computer environment of an automatic traffic flow control system, in order to determine the moments for switching the working phases of the traffic lights. The model is a system of finite-difference recurrent equations and describes the change in traffic flow on each road section at each time step, based on retrieved data on traffic flow characteristics in the network, the capacity of maneuvers and the flow distribution through alternative maneuvers at intersections. The model has scaling and aggregation properties. The structure of the model depends on the structure of the graph of the controlled road network. The number of nodes in the graph is equal to the number of road sections in the considered network. The simulation of traffic flow changes in real time makes it possible to optimally determine the duration of traffic light operating phases and to provide traffic flow control with feedback based on its current state. The system of automatic collection and processing of input data for the model is presented. In order to model the states of traffic flow in the network and to solve the problem of optimal traffic flow control, the CTraf software package has been developed, a brief description of which is given in the paper.
An example of solving the optimal control problem for traffic flows using real data from the Moscow road network is given.
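A single step of a finite-difference recurrent traffic model of the kind described can be sketched as a conservation update on a chain of road sections. This is a toy analogue with hypothetical variable names; the CTraf model itself handles general network graphs, maneuver capacities and flow distribution among alternative maneuvers:

```python
def step_flow(n, capacity, out_frac, green):
    """One step of a toy finite-difference traffic model on a chain of
    road sections: section i releases at most out_frac[i]*n[i] vehicles
    downstream when its light is green, limited by the downstream
    section's spare capacity. Vehicle count is conserved."""
    n = list(n)
    for i in range(len(n) - 1):
        if not green[i]:
            continue                      # red phase: nothing leaves
        move = min(out_frac[i] * n[i], capacity[i + 1] - n[i + 1])
        move = max(move, 0.0)             # downstream may already be full
        n[i] -= move
        n[i + 1] += move
    return n
```

Varying the `green` pattern over successive steps plays the role of the traffic-light phase durations being optimized.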


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

