Search results for 'optimized functionality':
Articles found: 95
  1. Sviridenko A.B.
    Projecting the origin onto a linear manifold, a polyhedron, and a vertex of a polyhedron. Newton methods of minimization
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 563-591

    We consider approaches to the construction of methods for solving quadratic programming problems that arise when computing descent directions in Newton methods for minimizing smooth functions on a set defined by linear equalities. The approach consists of two stages.

    At the first stage, the quadratic programming problem is transformed by a numerically stable direct multiplicative algorithm into an equivalent problem of projecting the origin of coordinates onto a linear manifold, which defines a new mathematical formulation of the dual quadratic problem. For this, a numerically stable direct multiplicative method for solving systems of linear equations is proposed, taking into account the sparsity of matrices presented in packed form. The advantage of this approach is that the modified Cholesky factors used to construct a substantially positive definite matrix of the system of equations, and the solution of that system, are computed within a single procedure. A further advantage is the possibility of minimizing the fill-in of the main rows of the multipliers without losing accuracy of the results; no changes are made in the position of the next processed row of the matrix, which allows the use of static data storage formats.

    At the second stage, the necessary and sufficient optimality conditions in Kuhn–Tucker form determine the calculation of the descent direction: the solution of the dual quadratic problem is reduced to solving a system of linear equations with a symmetric positive definite matrix for the Lagrange multipliers and to substituting the solution into the formula for the descent direction.

    It is proved that the proposed approach to calculating the descent direction by numerically stable direct multiplicative methods requires, per iteration, fewer computations by a cubic law than one iteration of the well-known dual method of Gill and Murray. Besides, the proposed method allows the computational process to be organized from any starting point that the user chooses as the initial approximation of the solution.

    Variants of the problem of projecting the origin of coordinates onto a linear manifold, a convex polyhedron, and a vertex of a convex polyhedron are presented, and the relationship and implementation of methods for solving these problems are described.

    Views (last year): 6.
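
    As a sketch of the projection problem at the core of this approach (the paper's numerically stable direct multiplicative algorithm and packed sparse storage are not reproduced), the minimum-norm point of the manifold $\{x : Ax = b\}$ can be computed through a Cholesky factorization of the symmetric positive definite Gram matrix $AA^T$; the dense NumPy route and the name project_origin below are illustrative assumptions.

```python
import numpy as np

def project_origin(A, b):
    """Return argmin ||x|| subject to A x = b, for A with full row rank."""
    G = A @ A.T                                      # symmetric positive definite Gram matrix
    L = np.linalg.cholesky(G)                        # G = L L^T
    y = np.linalg.solve(L.T, np.linalg.solve(L, b))  # Lagrange multipliers of the dual problem
    return A.T @ y                                   # nearest point of the manifold to the origin

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])
x = project_origin(A, b)
print(x, A @ x)                                      # A @ x recovers b
```
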
  2. Zelenkov G.A., Sviridenko A.B.
    Approach to development of algorithms of Newtonian methods of unconstrained optimization, their software implementation and benchmarking
    Computer Research and Modeling, 2013, v. 5, no. 3, pp. 367-377

    An approach is proposed to increase the efficiency of Gill and Murray's algorithm for constructing Newtonian methods of unconstrained optimization with step adjustment; it rests on Cholesky factorization. It is proved that the strategy for choosing the descent direction also determines the solution of the problem of scaling the steps at descent, of approximation by non-quadratic functions, and of integration with a trust-region method.

    Views (last year): 2. Citations: 7 (RSCI).
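
    A minimal sketch of the general idea, assuming a standard Gill–Murray-style safeguard rather than the authors' integrated factorization: if the Hessian is not positive definite, a multiple of the identity is added until a Cholesky factorization succeeds, and the step is then adjusted by backtracking. All names and the toy function are illustrative.

```python
import numpy as np

def newton_direction(H, g, beta=1e-3):
    """Descent direction from a Cholesky-verified positive definite modification of H."""
    tau = 0.0
    while True:
        try:
            L = np.linalg.cholesky(H + tau * np.eye(len(g)))  # succeeds iff H + tau*I is PD
            break
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, beta)                        # inflate the shift and retry
    return np.linalg.solve(L.T, np.linalg.solve(L, -g))

def step_length(f, x, d, g, alpha=1.0, rho=0.5, c=1e-4):
    """Backtracking step adjustment (Armijo condition)."""
    while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
        alpha *= rho
    return alpha

# toy run: minimize the nonconvex f(x, y) = x^4 - 2 x^2 + y^2
f = lambda z: z[0] ** 4 - 2 * z[0] ** 2 + z[1] ** 2
grad = lambda z: np.array([4 * z[0] ** 3 - 4 * z[0], 2 * z[1]])
hess = lambda z: np.array([[12 * z[0] ** 2 - 4, 0.0], [0.0, 2.0]])

z = np.array([0.1, 1.0])                    # near the saddle, where the Hessian is indefinite
for _ in range(30):
    d = newton_direction(hess(z), grad(z))
    z = z + step_length(f, z, d, grad(z)) * d
print(z)                                    # approaches a minimizer (+-1, 0)
```
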
  3. Akhmetvaleev A.M., Katasev A.S.
    Neural network model for determining the human intoxication functional state in some transport safety problems
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 285-293

    This article solves the problem of determining the intoxication functional state of vehicle drivers. Its solution is relevant in the transport security field during pre-trip medical examination. The solution is based on the application of the pupillometry method, which makes it possible to evaluate the driver's state by his pupillary reaction to a change in illumination. The problem is to determine the state of driver inebriation by analyzing the parameter values of the pupillogram, a time series characterizing the change in pupil dimensions upon exposure to a short-time light pulse. It is proposed to use a neural network for the pupillogram analysis. A neural network model for determining the drivers' intoxication functional state is developed. For its training, specially prepared data samples are used, namely the values of the following parameters of pupillary reactions grouped into two classes of functional states of drivers: initial diameter, minimum diameter, half-constriction diameter, final diameter, constriction amplitude, constriction rate, expansion rate, latent reaction time, contraction time, expansion time, half-contraction time, and half-expansion time. An example of the initial data is given. Based on their analysis, a neural network model is constructed in the form of a perceptron consisting of twelve input neurons, twenty-five hidden-layer neurons, and one output neuron. To increase the adequacy of the model, the optimal cut-off point between the classes of solutions at the output of the neural network is determined using ROC analysis. A scheme for determining the driver's intoxication state is proposed, which includes the following steps: video registration of the pupillary reaction, construction of the pupillogram, calculation of the parameter values, data analysis based on the neural network model, classification of the driver's condition as “norm” or “rejection of the norm”, and making a decision on the person being examined. The medical worker conducting the driver examination is presented with a neural network assessment of the driver's intoxication state. On the basis of this assessment, a conclusion on the admission or removal of the driver from driving the vehicle is drawn. Thus, the neural network model solves the problem of increasing the efficiency of pre-trip medical examination by increasing the reliability of the decisions made.

    Views (last year): 42. Citations: 2 (RSCI).
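
    A hedged sketch of the described pipeline on synthetic data (the real pupillogram features are unavailable, and the exact ROC criterion used by the authors is not stated; Youden's J statistic is assumed here for the cut-off):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_curve

# Synthetic stand-in for the pupillogram data: 12 features per examination,
# two classes ("norm" / "rejection of the norm").
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# A network with 12 inputs, 25 hidden neurons, and 1 output, as in the paper.
net = MLPClassifier(hidden_layer_sizes=(25,), max_iter=2000, random_state=0)
net.fit(X, y)

# Choose the cut-off from the ROC curve instead of the default 0.5;
# Youden's J = tpr - fpr is an assumed criterion, the abstract does not name one.
scores = net.predict_proba(X)[:, 1]
fpr, tpr, thresholds = roc_curve(y, scores)
cutoff = thresholds[np.argmax(tpr - fpr)]
verdict = np.where(scores >= cutoff, "rejection of the norm", "norm")
print(cutoff, (verdict == "norm").mean())
```
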
  4. Gasnikov A.V., Gorbunov E.A., Kovalev D.A., Mohammed A.A., Chernousova E.O.
    The global rate of convergence for optimal tensor methods in smooth convex optimization
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 737-753

    In this work we consider the Monteiro – Svaiter accelerated hybrid proximal extragradient (A-HPE) framework and the accelerated Newton proximal extragradient (A-NPE) framework. The latter framework contains an optimal method for rather smooth convex optimization problems with a second-order oracle. We generalize the A-NPE framework to higher-order derivative oracles (schemes). We replace the Newton-type step in A-NPE that was used for the auxiliary problem by a regularized Newton (tensor) type step (Yu. Nesterov, 2018). Moreover, we generalize the large-step A-HPE/A-NPE framework by replacing Monteiro – Svaiter's large-step condition so that this framework can work for high-order schemes. The main contribution of the paper is as follows: we propose optimal high-order methods for convex optimization problems. As far as we know, at the moment there exist only zero-, first- and second-order optimal methods that work according to the lower bounds. For higher-order schemes there exists a gap between the lower bounds (Arjevani, Shamir, Shiff, 2017) and existing high-order (tensor) methods (Nesterov – Polyak, 2006; Yu. Nesterov, 2008; M. Baes, 2009; Yu. Nesterov, 2018). Asymptotically, the ratio of the rates of convergence of the best existing methods to the lower bounds is about 1.5. In this work we eliminate this gap and show that the lower bounds are tight. We also consider rather smooth strongly convex optimization problems and show how to generalize the proposed methods to this case. The basic idea is to use the restart technique until the iteration sequence reaches the region of quadratic convergence of Newton's method and then to use Newton's method. One can show that the considered method converges with the optimal rates up to a logarithmic factor. Note that the technique proposed in this work can be generalized to the case when we cannot solve the auxiliary problem exactly and, moreover, cannot even calculate the derivatives of the functional exactly. The proposed technique can also be generalized to composite optimization problems and, in particular, to constrained convex optimization problems. We also formulate a list of open questions that arise around the main result of this paper (an optimal universal method of high order, etc.).

    Views (last year): 75.
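
    The restart-then-Newton idea admits a schematic illustration (this is not the A-HPE/A-NPE machinery, and the switching test below is a heuristic assumption): run a first-order phase until the iterate is presumed to lie in the region of quadratic convergence, then take pure Newton steps.

```python
import numpy as np

def hybrid_minimize(f, grad, hess, x, lr=0.1, switch_tol=1e-2, tol=1e-10):
    """Gradient phase until a heuristic gradient-norm test, then pure Newton."""
    while np.linalg.norm(grad(x)) > switch_tol:        # first-order phase
        x = x - lr * grad(x)
    while np.linalg.norm(grad(x)) > tol:               # quadratic-convergence phase
        x = x - np.linalg.solve(hess(x), grad(x))
    return x

# toy strongly convex example: f(x) = x'Qx/2 + log(1 + ||x||^2)
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda x: 0.5 * x @ Q @ x + np.log1p(x @ x)
grad = lambda x: Q @ x + 2 * x / (1 + x @ x)

def hess(x):
    n2 = x @ x
    return Q + (2 * np.eye(2) * (1 + n2) - 4 * np.outer(x, x)) / (1 + n2) ** 2

print(hybrid_minimize(f, grad, hess, np.array([5.0, -3.0])))   # converges to the origin
```
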
  5. Alkousa M.S.
    On some stochastic mirror descent methods for constrained online optimization problems
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 205-217

    The problem of online convex optimization naturally occurs in cases when there is an update of statistical information. The mirror descent method is well known for non-smooth optimization problems; it is an extension of the subgradient method for solving non-smooth convex optimization problems in the case of a non-Euclidean distance. This paper is devoted to a stochastic variant of recently proposed Mirror Descent methods for convex online optimization problems with convex Lipschitz (generally, non-smooth) functional constraints. This means that we can still use the value of the functional constraint, but instead of the (sub)gradients of the objective functional and the functional constraint we use their stochastic (sub)gradients. More precisely, assume that, on a closed subset of an $n$-dimensional vector space, $N$ convex Lipschitz non-smooth functionals are given. The problem is to minimize the arithmetic mean of these functionals under a convex Lipschitz constraint. Two methods using stochastic (sub)gradients are proposed for solving this problem: an adaptive method (which requires knowledge of the Lipschitz constant neither for the objective functional nor for the functional constraint) and a non-adaptive method (which requires knowledge of the Lipschitz constants of the objective functional and the functional constraint). Note that the stochastic (sub)gradient of each functional is allowed to be calculated only once. In the case of non-negative regret, we find that the number of non-productive steps is $O(N)$, which indicates the optimality of the proposed methods. We consider an arbitrary proximal structure, which is essential for decision-making problems. The results of numerical experiments are presented, allowing a comparison of the adaptive and non-adaptive methods on some examples. It is shown that the adaptive method can significantly improve the number of solutions found.

    Views (last year): 42.
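
    A simplified deterministic Euclidean sketch of the productive/non-productive step logic (the stochastic oracles, the non-Euclidean proximal structure, and the paper's adaptive step-size policy are all omitted; the step $h = \varepsilon/\|v\|^2$ is a standard choice in this literature, not necessarily the authors'):

```python
import numpy as np

def constrained_mirror_descent(sub_f, sub_g, g, x, eps, n_steps):
    """Average of productive iterates for min f(x) subject to g(x) <= 0."""
    productive = []
    for _ in range(n_steps):
        if g(x) <= eps:
            v = sub_f(x)                 # productive step: objective subgradient
            productive.append(x.copy())
        else:
            v = sub_g(x)                 # non-productive step: constraint subgradient
        h = eps / max(v @ v, 1e-12)      # step size eps / ||v||^2
        x = x - h * v                    # Euclidean prox mapping
    return np.mean(productive, axis=0)

# toy problem: minimize ||x - c||_1 subject to ||x||_inf - 1 <= 0
c = np.array([2.0, -0.5])
sub_f = lambda x: np.sign(x - c)
g = lambda x: np.max(np.abs(x)) - 1.0

def sub_g(x):
    s = np.zeros_like(x)
    i = np.argmax(np.abs(x))
    s[i] = np.sign(x[i])
    return s

print(constrained_mirror_descent(sub_f, sub_g, g, np.zeros(2), 0.05, 5000))  # near (1, -0.5)
```
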
  6. Tyurin A.I.
    Primal-dual fast gradient method with a model
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 263-274

    In this work we consider the possibility of using the concept of the $(\delta, L)$-model of a function for optimization problems in which, while solving a primal problem, it is necessary to recover a solution of a dual problem. The concept of the $(\delta, L)$-model is based on the concept of the $(\delta, L)$-oracle proposed by Devolder–Glineur–Nesterov, wherein the authors proposed to bound a function from above by a convex quadratic function with some additive noise $\delta$. They managed to obtain convex quadratic upper bounds with noise even for non-smooth functions. The concept of the $(\delta, L)$-model continues this idea by using, instead of a convex quadratic function, a more complex convex function in the upper bound. The possibility of recovering the solution of a dual problem gives great benefits in various problems; for instance, in some cases it is faster to find a solution of the primal problem than of the dual problem. Note that primal-dual methods are well studied, but usually each class of optimization problems has its own primal-dual method. Our goal is to develop a method which can find solutions in different classes of optimization problems. This is achieved through the use of the concept of the $(\delta, L)$-model and the adaptive structure of our methods. Thereby, we developed a primal-dual adaptive gradient method and a fast gradient method with the $(\delta, L)$-model and proved convergence rates of the methods; moreover, for some classes of optimization problems the rates are optimal. The main idea is the following: we find a dual solution to an approximation of the primal problem using the concept of the $(\delta, L)$-model. It is much easier to find a solution to an approximated problem; however, we have to do it at each step of our method, whereby the principle of “divide and conquer” is realized.
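
    A much simplified sketch of the adaptive machinery, assuming the plainest possible model $\psi(y; x) = \langle \nabla f(x), y - x \rangle$ with $\delta = 0$ (so this degenerates to an ordinary adaptive gradient method; the primal-dual recovery of the paper is not reproduced):

```python
import numpy as np

def adaptive_gradient(f, grad, x, L=1.0, tol=1e-8, max_iter=10000):
    """Gradient steps with L adapted so that the (0, L)-model upper bound holds."""
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        L /= 2                           # optimistic decrease of the local constant
        while True:                      # adapt L until the model majorizes f at y
            y = x - g / L
            if f(y) <= f(x) + g @ (y - x) + 0.5 * L * np.sum((y - x) ** 2):
                break
            L *= 2
        x = y
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
print(adaptive_gradient(f, grad, np.array([3.0, -2.0])))   # converges to the origin
```
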

  7. Spevak L.P., Nefedova O.A.
    Numerical solution to a two-dimensional nonlinear heat equation using radial basis functions
    Computer Research and Modeling, 2022, v. 14, no. 1, pp. 9-22

    The paper presents a numerical solution to the heat wave motion problem for a degenerate second-order nonlinear parabolic equation with a source term. The nonlinearity is conditioned by the power dependence of the heat conduction coefficient on temperature. The problem for the case of two spatial variables is considered, with the boundary condition specifying the heat wave motion law. A new solution algorithm based on an expansion in radial basis functions and the boundary element method is proposed. The solution is constructed stepwise in time with finite-difference time approximation. At each time step, a boundary value problem for the Poisson equation corresponding to the original equation at a fixed time is solved. The solution to this problem is constructed iteratively as the sum of a particular solution to the nonhomogeneous equation and a solution to the corresponding homogeneous equation satisfying the boundary conditions. The homogeneous equation is solved by the boundary element method. The particular solution is sought by the collocation method using an expansion of the inhomogeneity in radial basis functions. The calculation algorithm is optimized by parallelizing the computations. The algorithm is implemented as a program written in the C++ language. The parallel computations are organized using the OpenCL standard, which allows one to run the same parallel code either on multi-core CPUs or on GPUs. Test cases are solved to evaluate the effectiveness of the proposed solution method and the correctness of the developed computational technique. The calculation results are compared with known exact solutions, as well as with the results we obtained earlier. The accuracy of the solutions and the calculation time are estimated. The effectiveness of using various systems of radial basis functions to solve the problems under study is analyzed, and the most suitable system of functions is selected. A comprehensive computational experiment shows that the proposed new algorithm achieves higher calculation accuracy than the previously developed one.
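
    For orientation only, a compact sketch of radial-basis-function collocation (the Kansa method) for a Poisson problem on the unit square; the paper instead combines a boundary element solution of the homogeneous equation with an RBF particular solution inside a time-stepping loop, none of which is reproduced here. The multiquadric basis and the shape parameter are assumptions.

```python
import numpy as np

def multiquadric(r2, c2):
    return np.sqrt(r2 + c2)

def lap_multiquadric(r2, c2):              # 2-D Laplacian of sqrt(r^2 + c^2)
    return (r2 + 2 * c2) / (r2 + c2) ** 1.5

n, c2 = 12, 0.5 ** 2                       # grid size and (assumed) shape parameter
g = np.linspace(0.0, 1.0, n)
pts = np.array([(x, y) for x in g for y in g])
boundary = (pts[:, 0] % 1 == 0) | (pts[:, 1] % 1 == 0)

# manufactured test: u = sin(pi x) sin(pi y), so Laplace(u) = -2 pi^2 u
u_exact = lambda p: np.sin(np.pi * p[:, 0]) * np.sin(np.pi * p[:, 1])
f_rhs = lambda p: -2 * np.pi ** 2 * u_exact(p)

# collocation system: PDE rows at interior points, Dirichlet rows on the boundary
r2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
A = np.where(boundary[:, None], multiquadric(r2, c2), lap_multiquadric(r2, c2))
rhs = np.where(boundary, u_exact(pts), f_rhs(pts))
lam = np.linalg.solve(A, rhs)

u_num = multiquadric(r2, c2) @ lam
print(np.max(np.abs(u_num - u_exact(pts))))   # small collocation error
```
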

  8. Agafonov A.D.
    Lower bounds for conditional gradient type methods for minimizing smooth strongly convex functions
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 213-223

    In this paper, we consider conditional gradient methods for optimizing strongly convex functions. These are methods that use a linear minimization oracle, which, for a given vector $p \in \mathbb{R}^n$, computes the solution of the subproblem

    \[ \text{Argmin}_{x\in X}{\langle p,\,x \rangle}. \]

    There are a variety of conditional gradient methods that have a linear convergence rate in the strongly convex case. However, in all these methods, the dimension of the problem is included in the rate of convergence, which in modern applications can be very large. In this paper, we prove that in the strongly convex case, the convergence rate of the conditional gradient methods in the best case depends on the dimension of the problem $n$ as $\widetilde{\Omega}\left(\sqrt{n}\right)$. Thus, the conditional gradient methods may turn out to be ineffective for solving strongly convex optimization problems of large dimensions.

    Also, the application of conditional gradient methods to minimization problems of a quadratic form is considered. The effectiveness of the Frank – Wolfe method for solving the quadratic optimization problem in the convex case on a simplex (PageRank) has already been proved. This work shows that the use of conditional gradient methods to solve the minimization problem of a quadratic form in the strongly convex case is ineffective due to the presence of the dimension in the convergence rate of these methods. Therefore, the Shrinking Conditional Gradient method is considered. Its difference from the conditional gradient methods is that it uses a modified linear minimization oracle, that is, an oracle which, for a given vector $p \in \mathbb{R}^n$, computes the solution of the subproblem

    \[ \text{Argmin}\{\langle p, \,x \rangle\colon x\in X, \;\|x-x_0^{}\| \leqslant R \}. \]

    The convergence rate of such an algorithm does not depend on the dimension. Using the Shrinking Conditional Gradient method, the complexity (the total number of arithmetic operations) of solving the minimization problem of a quadratic form on an $\infty$-ball is obtained. The resulting estimate for the method is comparable to the complexity of the gradient method.
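
    A minimal sketch of the classical conditional gradient (Frank – Wolfe) method for a quadratic on the $\infty$-ball, where the linear minimization oracle has the closed form $-R\,\mathrm{sign}(p)$; the Shrinking Conditional Gradient modification discussed above is not shown, and the step-size rule is the standard open-loop choice.

```python
import numpy as np

def frank_wolfe(Q, b, R, n_steps=500):
    """Minimize x'Qx/2 - b'x over the ball ||x||_inf <= R by conditional gradient."""
    x = np.zeros(len(b))
    for k in range(n_steps):
        p = Q @ x - b                  # gradient at the current point
        s = -R * np.sign(p)            # linear minimization oracle on the inf-ball
        gamma = 2.0 / (k + 2)          # classical open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([3.0, -1.0])
print(frank_wolfe(Q, b, R=1.0))        # approaches the corner solution (1, -1)
```
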

  9. Berger A.I., Guda S.A.
    Optimal threshold selection algorithms for multi-label classification: property study
    Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1221-1238

    Multi-label classification models arise in various areas of life, which is explained by the increasing amount of information that requires prompt analysis. One of the mathematical methods for solving this problem is the plug-in approach: at its first stage, a certain ranking function is built for each class, ordering all objects in some way, and at the second stage the optimal thresholds are selected, such that the objects on one side of each threshold are assigned to the current class and the objects on the other side are not. The thresholds are chosen to maximize the target quality measure. The algorithms whose properties are investigated in this article address the second stage of the plug-in approach, the choice of the optimal threshold vector. This step becomes non-trivial if the $F$-measure of average precision and recall is used as the target quality assessment, since it does not allow independent threshold optimization in each class. In problems of extreme multi-label classification, the number of classes can reach hundreds of thousands, so the original optimization problem is reduced to the problem of searching for a fixed point of a specially introduced transformation $\boldsymbol V$, defined on the unit square in the plane of average precision $P$ and recall $R$. Using this transformation, two optimization algorithms are proposed: the $F$-measure linearization method and the method of $\boldsymbol V$ domain analysis. The properties of the algorithms are studied when applied to multi-label classification data sets of various sizes and origins, in particular the dependence of the error on the number of classes, on the $F$-measure parameter, and on the internal parameters of the methods under study. A peculiarity in the behavior of both algorithms was found for problems in which the domain of $\boldsymbol V$ contains large linear boundaries. In the case when the optimal point is located in the vicinity of these boundaries, the errors of both methods do not decrease with an increase in the number of classes. In this case, the linearization method determines the argument of the optimal point quite accurately, while the method of $\boldsymbol V$ domain analysis accurately determines the polar radius.
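
    As a hedged illustration of why the thresholds are coupled (this is not the linearization method or the $\boldsymbol V$ domain analysis of the paper), the sketch below scans a single shared threshold and evaluates the $F$-measure of class-averaged precision and recall on synthetic scores; all data and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obj, n_cls = 1000, 5
Y = rng.random((n_obj, n_cls)) < 0.2                      # synthetic true labels
S = np.clip(Y * 0.4 + rng.random((n_obj, n_cls)), 0, 1)   # synthetic ranking scores

def averaged_f(threshold, beta=1.0):
    """F-measure of class-averaged precision P and recall R at a shared threshold."""
    pred = S >= threshold
    tp = (pred & Y).sum(axis=0)
    P = np.mean(tp / np.maximum(pred.sum(axis=0), 1))     # average precision
    R = np.mean(tp / np.maximum(Y.sum(axis=0), 1))        # average recall
    return (1 + beta ** 2) * P * R / max(beta ** 2 * P + R, 1e-12)

grid = np.linspace(0.05, 0.95, 91)
best = grid[np.argmax([averaged_f(t) for t in grid])]
print(best, averaged_f(best))
```

    Because $P$ and $R$ are averaged over classes before the $F$-measure is taken, changing the threshold of one class shifts the optimum for all the others, which is exactly what makes the second stage of the plug-in approach non-trivial.
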

  10. Sviridenko A.B.
    Estimating the number of iterations for strongly polynomial linear programming algorithms
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 249-285

    A direct algorithm for solving a linear programming (LP) problem given in canonical form is considered. The algorithm consists of two successive stages, in which the following LP problems are solved by a direct method: a non-degenerate auxiliary problem at the first stage and a problem equivalent to the original one at the second. The construction of the auxiliary problem is based on a multiplicative version of the Gaussian elimination method, in whose very structure there are capabilities for: detecting inconsistency and linear dependence of the constraints; identifying variables whose optimal values are obviously zero; actually eliminating primal variables and reducing the dimension of the space in which the solution of the original problem is determined. In the process of actually eliminating variables, the algorithm generates a sequence of multipliers whose main rows form the constraint matrix of the auxiliary problem, and the possibility of minimizing the fill-in of the main rows of the multipliers is inherent in the very structure of direct methods. At the same time, there is no need to transfer information (the basis, the plan, and the optimal value of the objective function) to the second stage of the algorithm, nor to apply any of the ways of eliminating cycling to guarantee finite convergence.

    Two variants of the algorithm for solving the auxiliary problem in conjugate canonical form are presented. The first is based on solving it by a direct algorithm in terms of the simplex method, and the second on solving the problem dual to it by the simplex method. It is shown that both variants of the algorithm, for the same initial data (inputs), generate the same sequence of points: the basic solutions and the current dual solutions, i.e. the vectors of row estimates. Hence it is concluded that the direct algorithm is an algorithm of simplex-method type. It is also shown that a comparison of the numerical schemes leads to the conclusion that the direct algorithm reduces, by a cubic law, the number of arithmetic operations necessary to solve the auxiliary problem compared with the simplex method. An estimate of the number of iterations is given.
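
    A schematic illustration of the detection capabilities listed above, assuming plain dense Gaussian elimination with partial pivoting rather than the paper's multiplicative scheme: elimination over the augmented matrix $[A\,|\,b]$ exposes linearly dependent rows and inconsistent constraints.

```python
import numpy as np

def analyze_constraints(A, b, tol=1e-10):
    """Count dependent rows of A x = b and report inconsistency after elimination."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    m, n = A.shape
    row = 0
    for col in range(n):
        piv = row + np.argmax(np.abs(M[row:, col]))
        if abs(M[piv, col]) < tol:
            continue
        M[[row, piv]] = M[[piv, row]]        # partial pivoting
        M[row] /= M[row, col]
        for r in range(m):
            if r != row:
                M[r] -= M[r, col] * M[row]
        row += 1
        if row == m:
            break
    dependent = np.all(np.abs(M[:, :n]) < tol, axis=1)          # zero rows of A part
    inconsistent = dependent & (np.abs(M[:, n]) > tol)          # 0 = nonzero rows
    return int(dependent.sum()), bool(inconsistent.any())

A = np.array([[1.0, 2.0, 1.0], [2.0, 4.0, 2.0], [0.0, 1.0, 1.0]])
b = np.array([3.0, 6.0, 1.0])
print(analyze_constraints(A, b))             # (1, False): one redundant row, consistent
```
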
