Search results for 'optimal method':
Articles found: 135
  1. Koltsov Y.V., Boboshko E.V.
    Comparative analysis of optimization methods for electrical energy losses interval evaluation problem
    Computer Research and Modeling, 2013, v. 5, no. 2, pp. 231-239

    This article is dedicated to a comparative analysis of optimization methods for the interval estimation of technical losses of electrical energy in 6–20 kV distribution networks. The interval evaluation problem is posed as a multi-dimensional constrained minimization/maximization problem with an implicit objective function. A number of first- and zero-order numerical optimization methods are reviewed with the aim of determining the one most suitable for the problem of interest. The chosen algorithm is BOBYQA, in which the objective function is replaced by its quadratic approximation over a trust region.

    Views (last year): 2. Citations: 1 (RSCI).
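
    As a rough illustration of the formulation in the abstract above (not the authors' implementation), the following Python sketch estimates an interval for an implicit loss function by minimizing and then maximizing it over an uncertainty box with a derivative-free method; the toy loss model and the bounds are assumptions, and SciPy's bounded Powell method (available in recent SciPy versions) stands in for BOBYQA.

```python
# Hypothetical toy example: interval estimation of an implicit loss function
# by constrained derivative-free minimization/maximization over a box.
# SciPy's bounded Powell method stands in for BOBYQA; the loss model and
# bounds are invented for illustration.
import numpy as np
from scipy.optimize import minimize

def losses(x):
    """Placeholder for the implicit technical-losses model (assumption)."""
    return 0.02 * x[0] ** 2 + 0.03 * x[1] ** 2 + 0.01 * x[0] * x[1] + 1.5

bounds = [(0.8, 1.2), (0.8, 1.2)]   # assumed uncertainty box for the inputs
x0 = np.array([1.0, 1.0])

lo = minimize(losses, x0, method="Powell", bounds=bounds)
hi = minimize(lambda x: -losses(x), x0, method="Powell", bounds=bounds)
print(f"loss interval ~ [{lo.fun:.3f}, {-hi.fun:.3f}]")
```
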
  2. Parkhomenko P.V.
    Pareto optimal analysis of global warming prevention by geoengineering methods
    Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1097-1108

    The study is based on a three-dimensional hydrodynamic coupled global climate model, which includes an ocean model with realistic depths and continent configuration, a sea-ice evolution model, and an energy and moisture balance atmosphere model. Aerosol concentration over the years 2010 to 2100 is calculated as a controlling parameter to stabilize the annual mean surface air temperature. It is shown that in this way it is impossible to achieve a spatially and seasonally uniform approximation to the existing climate, although it is possible to significantly reduce the greenhouse warming effect. The climate will be colder by 0.1–0.2 degrees in the low and mid-latitudes, while at high latitudes it will be warmer by 0.2–1.2 degrees. The Pareto frontier is investigated and visualized for two criteria — the mean square deviations of atmospheric temperature for the winter and summer seasons. The Pareto optimal amount of sulfur emissions would be between 23.5 and 26.5 TgS/year.

    Views (last year): 1. Citations: 3 (RSCI).
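
    The key computational step in the abstract above is extracting a Pareto frontier over two criteria. Below is a minimal, generic non-dominated-filtering sketch in Python; the candidate list is invented for illustration and is not the model's output.

```python
# Hypothetical example: select Pareto-optimal options when both criteria
# (winter and summer temperature RMS deviations) should be minimized.
candidates = [
    # (sulfur emissions, winter deviation, summer deviation), all invented
    (22.0, 0.9, 0.4),
    (23.5, 0.7, 0.5),
    (25.0, 0.6, 0.6),
    (26.5, 0.5, 0.8),
    (28.0, 0.5, 0.9),
]

def pareto_front(points):
    """Keep points not dominated in both criteria simultaneously."""
    front = []
    for p in points:
        dominated = any(
            q[1] <= p[1] and q[2] <= p[2] and (q[1] < p[1] or q[2] < p[2])
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

for emis, winter, summer in pareto_front(candidates):
    print(f"{emis:.1f} TgS/year: winter {winter}, summer {summer}")
```
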
  3. Orlova E.V.
    Model for operational optimal control of financial resources distribution in a company
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 343-358

    A critical analysis of existing approaches, methods, and models for solving the problem of operational management of financial resources is carried out in the article. A number of significant shortcomings of the presented models were identified, limiting the scope of their effective use: the models are static, the probabilistic nature of financial flows is not taken into account, and the daily amounts of receivables and payables that significantly affect the solvency and liquidity of the company are not identified. This necessitates the development of a new model that reflects the essential properties of the system of planned financial flows — stochasticity, dynamism, and non-stationarity.

    A model for the distribution of financial flows has been developed. It is based on the principles of optimal dynamic control and provides financial resources planning that ensures an adequate level of liquidity and solvency of a company while taking the uncertainty of the initial data into account. An algorithm for designing the target cash balance, based on the principle of ensuring a company's financial stability under changing financial constraints, is proposed.

    A characteristic feature of the proposed model is the representation of the cash distribution process as a discrete dynamic process, for which a plan of financial resources allocation is determined that ensures the extremum of an optimality criterion. Such a plan is designed by coordinating payments (cash expenses) with cash receipts. This approach makes it possible to synthesize different plans that differ in combinations of financial outflows and then to select the best one according to a given criterion. The minimum total cost associated with paying fines for untimely financing of expenses was taken as the optimality criterion. The model's constraints are the requirement to ensure the minimum allowable cash balances for the sub-periods of the planning period, as well as the obligation to make payments during the planning period, taking their maturity into account. The suggested model solves, with a high degree of efficiency, the problem of distributing financial resources under uncertainty in the timing and amounts of receipts and of coordinating cash inflows and outflows. The practical significance of the research lies in the application of the developed model, which allows the quality of financial planning, the efficiency of management, and the operational efficiency of a company to be improved.

    Views (last year): 33.
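
    To make the planning formulation above concrete, here is a deliberately tiny brute-force sketch in Python: for each payment it picks a sub-period no earlier than the due date that keeps the cash balance above a minimum level and minimizes the total late-payment penalty. All amounts, the penalty rule, and the horizon are invented for illustration.

```python
# Hypothetical toy instance of the cash-distribution plan: pick payment dates
# that respect a minimum balance and minimize penalties for paying late.
from itertools import product

periods = range(4)                       # planning sub-periods 0..3
receipts = [100.0, 40.0, 60.0, 30.0]     # assumed cash inflows per sub-period
min_balance = 20.0                       # assumed minimum allowable balance
# (amount due, due period, penalty per period of delay), all assumptions
payments = [(80.0, 0, 5.0), (50.0, 1, 3.0), (60.0, 2, 4.0)]

best_plan, best_cost = None, float("inf")
for plan in product(periods, repeat=len(payments)):
    if any(t < due for t, (_, due, _) in zip(plan, payments)):
        continue                         # cannot pay before the due period
    balance, feasible = 0.0, True
    for t in periods:
        balance += receipts[t]
        balance -= sum(amt for (amt, _, _), pt in zip(payments, plan) if pt == t)
        if balance < min_balance:
            feasible = False
            break
    if feasible:
        penalty = sum(rate * (pt - due) for (_, due, rate), pt in zip(payments, plan))
        if penalty < best_cost:
            best_plan, best_cost = plan, penalty

print("payment periods:", best_plan, "total penalty:", best_cost)
```
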
  4. Varshavsky L.E.
    Studying indicators of development of oligopolistic markets on the basis of operational calculus
    Computer Research and Modeling, 2019, v. 11, no. 5, pp. 949-963

    The traditional approach to computing optimal game strategies of firms on oligopolistic markets and of indicators of such markets consists in studying linear dynamical games with quadratic criteria and solving generalized matrix Riccati equations.

    The alternative approach proposed by the author is based on methods of operational calculus (in particular, the Z-transform). This approach makes it possible to obtain economically meaningful solutions over a wider range of parameter values. It is characterized by computational simplicity and by the visibility needed for economic analysis. One of its advantages is that, in many cases important for economic practice, it, in contrast to the traditional approach, allows calculations to be carried out in widespread spreadsheets, which makes the study of the prospects for the development of oligopolistic markets accessible to a wide range of professionals and consumers.

    The article deals with the practical aspects of determining the optimal Nash–Cournot strategies of participants in oligopolistic markets on the basis of operational calculus, in particular the technique of computing the optimal Nash–Cournot strategies in Excel. As an illustration of the opportunities of the proposed calculation methods, examples close to practical problems of forecasting indicators of high-tech product markets are studied.

    The results of calculations obtained by the author for numerous examples and real economic systems, both using the derived relations on the basis of spreadsheets and using extended Riccati equations, are very close. In most of the practical problems considered, the deviation between the indicators calculated with the two approaches does not exceed 1.5–2%. The highest relative deviations (up to 3–5%) are observed at the beginning of the forecasting period. In typical cases, the period of relatively noticeable deviations lasts 3–5 time steps. After this transition period, the values of the required indicators obtained with the two approaches agree almost completely.
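
    As a reference point for the equilibrium concept discussed above (a static toy, not the dynamic Z-transform technique of the paper), the sketch below computes Nash–Cournot quantities for a linear inverse demand by sequential best-response iteration; the demand and cost parameters are invented.

```python
# Hypothetical static Cournot toy: inverse demand p = a - b*Q, firm i's cost
# c_i * q_i, best response q_i = (a - c_i - b*Q_{-i}) / (2b). Sequential
# best-response iteration converges to the Nash-Cournot quantities.
a, b = 100.0, 1.0            # assumed demand parameters
costs = [10.0, 15.0, 20.0]   # assumed marginal costs of three firms

q = [0.0] * len(costs)
for _ in range(200):
    for i, c in enumerate(costs):
        q[i] = max(0.0, (a - c - b * (sum(q) - q[i])) / (2 * b))

price = a - b * sum(q)
print("equilibrium quantities:", [round(v, 2) for v in q], "price:", round(price, 2))
```
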

  5. Khorkov A.V., Khorkov A.V.
    Linear and nonlinear optimization models of multiple covering of a bounded plane domain with circles
    Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1101-1110

    Problems of multiple covering ($k$-covering) of a bounded set $G$ with equal circles of a given radius are well known. They are thoroughly studied under the assumption that $G$ is a finite set. There are several papers concerned with studying this problem in the case where $G$ is a connected set. In this paper, we study the problem of minimizing the number of circles that form a $k$-covering, $k \geqslant 1$, provided that $G$ is a bounded convex plane domain.

    For the above-mentioned problem, we state a 0-1 linear model, a general integer linear model, and a nonlinear model, imposing a constraint on the minimum distance between the centers of covering circles. The latter constraint is due to the fact that in practice one can place at most one device at each point. We establish necessary and sufficient solvability conditions for the linear models and describe one (easily realizable) variant of these conditions in the case where the covered set $G$ is a rectangle.

    We propose some methods for finding an approximate number of circles of a given radius that provide the desired $k$-covering of the set $G$, both with and without constraints on distances between the circles’ centers. We treat the calculated values as approximate upper bounds for the number of circles. We also propose a technique that allows one to get approximate lower bounds for the number of circles that is necessary for providing a $k$-covering of the set $G$. In the general linear model, as distinct from the 0-1 linear model, we require no additional constraint. The difference between the upper and lower bounds for the number of circles characterizes the quality (acceptability) of the constructed $k$-covering.

    We state a nonlinear mathematical model for the $k$-covering problem with the above-mentioned constraints imposed on distances between the centers of covering circles. For this model, we propose an algorithm which (in certain cases) allows one to find more exact solutions to covering problems than those calculated from linear models.

    For implementing the proposed approach, we have developed computer programs and performed numerical experiments. Results of numerical experiments demonstrate the effectiveness of the method.
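
    A minimal sketch of a 0-1 covering model in the spirit of the first formulation above, written with the PuLP library (an assumption, not the authors' software) on a coarse grid of candidate centers and demand points; the radius, the grid, and the multiplicity k are arbitrary choices.

```python
# Hypothetical 0-1 model: choose the fewest circles of radius r, centered on
# grid nodes, so that every demand point of a rectangle is covered >= k times.
import math
import pulp

r, k = 1.5, 2                                        # assumed radius and multiplicity
centers = [(i, j) for i in range(5) for j in range(4)]             # candidate centers
demand = [(i + 0.5, j + 0.5) for i in range(4) for j in range(3)]  # points of G

x = {c: pulp.LpVariable(f"x_{c[0]}_{c[1]}", cat="Binary") for c in centers}
prob = pulp.LpProblem("k_covering", pulp.LpMinimize)
prob += pulp.lpSum(x.values())                       # minimize number of circles
for p in demand:                                     # each point covered >= k times
    covering = [x[c] for c in centers if math.dist(c, p) <= r]
    prob += pulp.lpSum(covering) >= k

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [c for c in centers if x[c].value() == 1]
print(len(chosen), "circles:", chosen)
```
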

  6. Ostroukhov P.A., Kamalov R.A., Dvurechensky P.E., Gasnikov A.V.
    Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 357-376

    In this paper we propose high-order (tensor) methods for two types of saddle point problems. Firstly, we consider the classic min-max saddle point problem. Secondly, we consider the search for a stationary point of the saddle point objective via minimization of its gradient norm. Obviously, the stationary point does not always coincide with the optimal point. However, if we have a linear optimization problem with linear constraints, the algorithm for gradient norm minimization becomes useful: in this case we can reconstruct the solution of the primal optimization problem from the solution of the gradient norm minimization of the dual function. In this paper we consider both types of problems without constraints. Additionally, we assume that the objective function is $\mu$-strongly convex in the first argument, $\mu$-strongly concave in the second argument, and that its $p$-th derivative is Lipschitz-continuous.

    For min-max problems we propose two algorithms. Since we consider a strongly convex–strongly concave problem, the first algorithm uses an existing tensor method for regular convex-concave saddle point problems and accelerates it with the restart technique. The complexity of such an algorithm is linear. If we additionally assume that our objective is first- and second-order Lipschitz, we can improve its performance even more by switching to another existing algorithm in its area of quadratic convergence. Thus, we get the second algorithm, which has a global linear convergence rate and a local quadratic convergence rate.

    Finally, in convex optimization there exists a special methodology for solving gradient norm minimization problems by tensor methods. Its main idea is to use existing (near-)optimal algorithms inside a special framework. We emphasize that inside this framework we do not necessarily need the strong convexity assumption, because we can regularize the convex objective in a special way to make it strongly convex. In our article we transfer this framework to convex-concave objective functions and use it with our aforementioned algorithm, which has a global linear and a local quadratic convergence rate.

    Since the saddle point problem is a particular case of the monotone variational inequality problem, the proposed methods will also work for solving strongly monotone variational inequality problems.
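
    The sketch below illustrates only the restart pattern mentioned above on a toy strongly convex–strongly concave problem, with a first-order extragradient step standing in for the tensor subroutine; the problem data, step size, and restart schedule are assumptions.

```python
# Hypothetical illustration of the restart pattern on a strongly convex-
# strongly concave toy f(x, y) = (mu/2)|x|^2 + x'Ay - (mu/2)|y|^2, whose
# saddle point is (0, 0). A first-order extragradient step stands in for
# the high-order subroutine used in the paper; all data are invented.
import numpy as np

rng = np.random.default_rng(0)
n, mu = 5, 1.0
A = rng.standard_normal((n, n))

def extragradient(x, y, steps, eta):
    """Base solver (stand-in for the tensor method)."""
    for _ in range(steps):
        gx, gy = mu * x + A @ y, A.T @ x - mu * y
        xh, yh = x - eta * gx, y + eta * gy          # extrapolation step
        gx, gy = mu * xh + A @ yh, A.T @ xh - mu * yh
        x, y = x - eta * gx, y + eta * gy            # update step
    return x, y

def restarted(x, y, rounds, steps, eta):
    """Restart wrapper: rerun the base method from its own output."""
    for r in range(rounds):
        x, y = extragradient(x, y, steps, eta)
        dist = np.sqrt(np.linalg.norm(x) ** 2 + np.linalg.norm(y) ** 2)
        print(f"restart {r}: distance to saddle point = {dist:.2e}")
    return x, y

x0, y0 = rng.standard_normal(n), rng.standard_normal(n)
eta = 0.5 / (mu + np.linalg.norm(A, 2))              # conservative step size
restarted(x0, y0, rounds=5, steps=50, eta=eta)
```
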

  7. Skorik S.N., Pirau V.V., Sedov S.A., Dvinskikh D.M.
    Comparison of stochastic approximation and sample average approximation for saddle point problem with bilinear coupling term
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 381-391

    Stochastic optimization is an active area of research due to significant advances in machine learning and its applications to everyday problems. In this paper, we consider two fundamentally different methods for solving the stochastic optimization problem — online and offline algorithms. Each of these algorithms has its own qualitative advantages over the other. Thus, offline algorithms require solving an auxiliary problem with high accuracy; however, this can be done in a distributed manner, which opens up fundamental possibilities such as, for example, the construction of a dual problem. Despite this, both online and offline algorithms pursue a common goal — solving the stochastic optimization problem with a given accuracy. This is reflected in the comparison of the computational complexity of the described algorithms, which is carried out in this paper.

    The comparison of the described methods is carried out for two types of stochastic problems — convex optimization and saddle point problems. For stochastic convex optimization problems, the existing results make it possible to compare online and offline algorithms in some detail. In particular, for strongly convex problems the computational complexity of the algorithms is the same, and the strong convexity condition can be weakened to the condition of $\gamma$-growth of the objective function. From this point of view, saddle point problems are much less studied. Nevertheless, existing results allow us to outline the main directions of research. Thus, significant progress has been made for bilinear saddle point problems using online algorithms, while offline algorithms are represented by just one study. In this paper, that example is used to demonstrate the similarity of both algorithms to the convex-optimization case. The question of the accuracy of solving the auxiliary problem for saddle point problems is also worked out. On the other hand, the saddle point problem of stochastic optimization generalizes the convex one, that is, it is its logical continuation. This manifests itself in the fact that existing results from convex optimization can be transferred to saddle point problems. In this paper, such a transfer is carried out for the results of the online algorithm in the convex case, when the objective function satisfies the $\gamma$-growth condition.
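
    To make the online/offline distinction above tangible, the following sketch contrasts stochastic approximation (one online SGD pass) with sample average approximation (offline minimization of the empirical objective) on an invented one-dimensional quadratic stochastic problem.

```python
# Hypothetical comparison on min_x E[(x - xi)^2 / 2] with xi ~ N(theta, 1):
# the true optimum is x = theta. The online method (stochastic approximation)
# makes one SGD pass over the stream; the offline method (sample average
# approximation) solves the empirical problem exactly, i.e. takes the mean.
import numpy as np

rng = np.random.default_rng(1)
theta, n = 3.0, 10_000
samples = theta + rng.standard_normal(n)

x = 0.0
for t, xi in enumerate(samples, start=1):
    x -= (x - xi) / t                 # SGD step with step size 1/t

x_saa = samples.mean()                # exact minimizer of the empirical average

# For this toy the 1/t recursion reproduces the running mean, so both estimates
# coincide; in general SAA requires solving the auxiliary (empirical) problem
# to high accuracy, while SA processes the data on the fly.
print(f"SA estimate:  {x:.4f}")
print(f"SAA estimate: {x_saa:.4f}")
print(f"true optimum: {theta}")
```
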

  8. Shumov V.V.
    Protection of biological resources in the coastal area: the mathematical model
    Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1109-1125

    The protection of aquatic biological resources in the coastal area has significant features (a large number of small fishing vessels, a rapidly changing situation, the use of coastal protection assets), by virtue of which it stands out as a separate class of applications. A mathematical model of protection is designed to determine the disposition of detection equipment and interception assets against violators in order to ensure deterrence of illegal activities. A tactical game-theoretic problem is solved: finding the optimal patrol line (stationing) of enforcement assets (guard boats) and the optimal distance from the shore of the violators' fishing locations. Using methods of the theory of experimental design, linear regression models are built to assess the contribution of the main factors affecting the simulation results.

    In order to enhance the robustness and adequacy of the model, it is proposed to use a mechanism of ranking the protection assets based on rank and Pareto frontiers, which makes it possible to take into account the principles of protection and additional protection assets. To account for the variability of the situation, several scenarios are proposed for which it is advisable to perform the calculations.

    Views (last year): 1. Citations: 1 (RSCI).
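
    As an illustration of the last step described above (not the authors' model), the sketch below runs a 2^3 full factorial design and fits a linear regression to estimate the contribution of each factor to a simulated response; the factors, effects, and noise level are invented.

```python
# Hypothetical 2^3 full factorial design with a linear regression estimating
# the contribution of each factor to a simulated protection-effectiveness
# response (factor names, effects, and noise are assumptions).
import itertools
import numpy as np

rng = np.random.default_rng(3)
levels = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
true_effects = np.array([0.4, 1.2, 0.7])     # assumed "true" main effects
response = 5.0 + levels @ true_effects + 0.1 * rng.standard_normal(len(levels))

X = np.column_stack([np.ones(len(levels)), levels])   # intercept + main effects
coef, *_ = np.linalg.lstsq(X, response, rcond=None)
for name, c in zip(["intercept", "patrol line", "boats", "sensor range"], coef):
    print(f"{name:>12}: {c:+.3f}")
```
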
  9. Silaeva V.A., Silaeva M.V., Silaev A.M.
    Estimation of model parameters for time series with Markov switching regimes
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 903-918

    The paper considers the problem of estimating the parameters of time series described by regression models with Markov switching between two regimes at random instants of time and with independent Gaussian noise. For the solution, we propose a variant of the EM algorithm based on an iterative procedure during which the regression parameters are estimated for a given sequence of regime switches and the switching sequence is estimated for the given parameters of the regression models. In contrast to the well-known methods of estimating regression parameters in models with Markov switching, which are based on the calculation of the a posteriori probabilities of the discrete states of the switching sequence, in this paper the estimates of the switching sequence are computed to be optimal by the criterion of the maximum a posteriori probability. As a result, the proposed algorithm turns out to be simpler and requires fewer calculations. Computer modeling makes it possible to reveal the factors influencing the accuracy of estimation. Such factors include the number of observations, the number of unknown regression parameters, the degree of their difference in the different modes of operation, and the signal-to-noise ratio, which is associated with the coefficient of determination in regression models. The proposed algorithm is applied to the problem of estimating parameters in regression models for the daily rate of return of the RTS index, depending on the returns of the S&P 500 index and Gazprom shares for the period from 2013 to 2018. The parameter estimates found using the proposed algorithm are compared with the estimates produced by the EViews econometric package and with ordinary least squares estimates that do not take regime switching into account. Taking regime switching into account provides a more accurate representation of the structure of the statistical dependence between the investigated variables. In switching models, an increase in the signal-to-noise ratio reduces the differences between the estimates produced by the proposed algorithm and those obtained with the EViews program.

    Views (last year): 36.
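
    A stripped-down sketch of the alternating idea described above, assuming two regimes, simulated data, and, for brevity, no Markov prior on the switching sequence (so observations are relabeled pointwise by best fit rather than by a full MAP sequence estimate).

```python
# Hypothetical simplified version of the alternating scheme: (1) refit OLS in
# each regime given the current labels, (2) relabel each observation by the
# regime with the smaller squared residual. The Markov prior on the switching
# sequence is omitted here for brevity, so this is only pointwise labeling.
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.standard_normal(n)
true_regime = (np.arange(n) // 100) % 2            # blocks of 100 observations
beta = np.array([[0.5, 1.0], [2.0, -1.0]])         # (intercept, slope) per regime
y = beta[true_regime, 0] + beta[true_regime, 1] * x + 0.3 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x])
labels = rng.integers(0, 2, size=n)                # random initial labeling
for _ in range(30):
    coefs = []
    for r in (0, 1):                               # step 1: OLS per regime
        mask = labels == r
        coefs.append(np.linalg.lstsq(X[mask], y[mask], rcond=None)[0])
    resid = np.stack([(y - X @ c) ** 2 for c in coefs])
    labels = resid.argmin(axis=0)                  # step 2: relabel pointwise

print("estimated regime coefficients:")
print(np.round(np.array(coefs), 2))
```
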
  10. Zabotin V.I., Chernyshevskij P.A.
    Extension of Strongin’s Global Optimization Algorithm to a Function Continuous on a Compact Interval
    Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1111-1119

    The Lipschitz continuity property has long been used to solve the global optimization problem and continues to be used. Here we can mention the work of Piyavskii, Yevtushenko, Strongin, Shubert, Sergeyev, Kvasov and others. Most papers assume a priori knowledge of the Lipschitz constant, but deriving this constant is a separate problem. Furthermore, one must prove that the objective function is really Lipschitz, which is a complicated problem too. For the case where Lipschitz continuity is established, Strongin proposed an algorithm for global optimization of a function satisfying the Lipschitz condition on a compact interval without any a priori knowledge of the Lipschitz estimate. The algorithm not only finds a global extremum but also determines the Lipschitz estimate. It is known that every function satisfying the Lipschitz condition on a compact convex set is uniformly continuous, but the reverse is not always true. However, there exist models (Arutyunova, Dulliev, Zabotin) whose study requires the minimization of a continuous but definitely not Lipschitz function. One of the algorithms for solving such a problem was proposed by R. J. Vanderbei. In his work he introduced a generalization of the Lipschitz property named $\varepsilon$-Lipschitz and proved that a function defined on a compact convex set is uniformly continuous if and only if it satisfies the $\varepsilon$-Lipschitz condition. This property allowed him to extend Piyavskii’s method. However, Vanderbei assumed that for a given value of $\varepsilon$ it is possible to obtain the associated Lipschitz $\varepsilon$-constant, which is a very difficult problem. Thus, there is a need to construct, for a function continuous on a compact convex domain, a global optimization algorithm which works in some way like Strongin’s algorithm, i.e., without any a priori knowledge of the Lipschitz $\varepsilon$-constant. In this paper we propose an extension of Strongin’s global optimization algorithm to a function continuous on a compact interval using the $\varepsilon$-Lipschitz concept, prove its convergence, and solve some numerical examples using software that implements the developed method.
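
    For reference, here is a compact sketch of the classic Strongin global search scheme on an interval with an adaptively estimated Lipschitz constant and a fixed reliability parameter r; it illustrates the baseline algorithm discussed above, not the $\varepsilon$-Lipschitz extension developed in the paper. The test function and parameters are arbitrary.

```python
# Hypothetical sketch of Strongin's global search on [a, b]: the Lipschitz
# constant is estimated adaptively from the trials, no a priori value is needed.
import math

def strongin(f, a, b, r=2.0, iters=60):
    xs = [a, b]
    zs = [f(a), f(b)]
    for _ in range(iters):
        order = sorted(range(len(xs)), key=xs.__getitem__)
        x = [xs[i] for i in order]
        z = [zs[i] for i in order]
        # adaptive estimate of the Lipschitz constant from all trials so far
        M = max(abs(z[i] - z[i - 1]) / (x[i] - x[i - 1]) for i in range(1, len(x)))
        m = r * M if M > 0 else 1.0
        # characteristic of each interval; split the one with the largest value
        def R(i):
            d = x[i] - x[i - 1]
            return m * d + (z[i] - z[i - 1]) ** 2 / (m * d) - 2 * (z[i] + z[i - 1])
        t = max(range(1, len(x)), key=R)
        x_new = 0.5 * (x[t] + x[t - 1]) - (z[t] - z[t - 1]) / (2 * m)
        xs.append(x_new)
        zs.append(f(x_new))
    k = min(range(len(zs)), key=zs.__getitem__)
    return xs[k], zs[k]

x_best, f_best = strongin(lambda x: math.sin(x) + math.sin(10 * x / 3), 2.7, 7.5)
print(f"approximate minimum: f({x_best:.4f}) = {f_best:.4f}")
```
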
