All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
Two families of the simple iteration method, in comparison
Computer Research and Modeling, 2012, v. 4, no. 1, pp. 5-29
The convergence to the solution of a linear system with a real square nonsingular matrix A whose real eigenvalues necessarily have different signs is considered for two families of the simple iteration method generated by this A and b: a two-parameter family and a symmetrized one-parameter family. The two families are also compared in the case when the matrix A is symmetric. For this case it is proved that the optimal contraction coefficient of the two-parameter family is strictly smaller than the optimal contraction coefficient of the symmetrized one-parameter family of the simple iteration method.
Keywords: simple iteration method, symmetric matrix.
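As background for the comparison above, here is a minimal NumPy sketch of the classical one-parameter simple (Richardson) iteration; the parameter name `tau`, the stopping rule, and the toy system are illustrative assumptions, and the two-parameter and symmetrized variants studied in the paper are not reproduced.

```python
import numpy as np

def simple_iteration(A, b, tau, x0=None, tol=1e-10, max_iter=10_000):
    """One-parameter simple (Richardson) iteration x_{k+1} = x_k - tau * (A x_k - b).

    The iteration converges when the spectral radius of (I - tau * A) is below 1;
    choosing the parameters optimally is exactly what the compared families differ in.
    """
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        r = A @ x - b                  # current residual
        if np.linalg.norm(r) < tol:    # stop once the residual is small enough
            break
        x = x - tau * r                # relaxation step with parameter tau
    return x

# toy usage on a small symmetric positive definite system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(simple_iteration(A, b, tau=0.2))
```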
Grid-cloud services simulation for the NICA project as a means of increasing the efficiency of their development
Computer Research and Modeling, 2014, v. 6, no. 5, pp. 635-642
A new simulation of grid and cloud services for the data storage and processing system of the NICA accelerator complex is described. The simulation is aimed at improving the efficiency of grid-cloud system development by using quality indicators of the work of a real system to design the system and predict its evolution. For this purpose the simulation program is combined with the real monitoring system of the grid-cloud service through a special database. An example of using the program to simulate a fairly general cloud structure, which can also be used for broader purposes, is given.
Projecting the origin onto a linear manifold, a polyhedron, and a vertex of a polyhedron. Newton minimization methods
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 563-591
We consider approaches to constructing methods for solving quadratic programming problems that arise in computing descent directions in methods for minimizing smooth functions on a set defined by a system of linear equalities. The approach consists of two stages.
At the first stage, the quadratic programming problem is transformed by a numerically stable direct multiplicative algorithm into the equivalent problem of projecting the origin onto a linear manifold, which defines a new mathematical formulation of the dual quadratic problem. For this purpose a numerically stable direct multiplicative method for solving systems of linear equations is proposed that takes into account the sparsity of matrices stored in packed form. The advantage of this approach is that the modified Cholesky factors used to build an essentially positive definite matrix of the system of equations, and the solution of that system, are computed within a single procedure. A further advantage is that the fill-in of the principal rows of the multipliers can be minimized without loss of accuracy, and no changes are made in the position of the next row of the matrix to be processed, which allows static data storage formats to be used.
At the second stage, the necessary and sufficient optimality conditions in Kuhn–Tucker form determine the computation of the descent direction: solving the dual quadratic problem reduces to solving a system of linear equations with a symmetric positive definite matrix for the Lagrange multipliers and to substituting this solution into the formula for the descent direction.
It is proved that computing the descent direction by the proposed numerically stable direct multiplicative methods requires, per iteration, an amount of computation that is smaller by a cubic law than one iteration of the well-known dual method of Gill and Murray. In addition, the proposed method allows the computational process to be organized from any starting point that the user chooses as the initial approximation to the solution.
Variants of the problem of projecting the origin onto a linear manifold, onto a convex polyhedron, and onto a vertex of a convex polyhedron are presented, and the relationship between the methods for solving these problems and their implementation is described.
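To make the first-stage subproblem concrete, here is a minimal dense NumPy sketch of projecting the origin onto the linear manifold {x : Cx = d} via an ordinary Cholesky factorization of the Gram matrix; the function name is hypothetical, and the paper's sparse, packed, multiplicative scheme with modified Cholesky factors is not reproduced.

```python
import numpy as np

def project_origin_onto_manifold(C, d):
    """Minimize ||x||^2 subject to C x = d.

    The minimizer is x* = C^T (C C^T)^{-1} d; here the Gram matrix C C^T
    (positive definite when C has full row rank) is factored by Cholesky.
    """
    G = C @ C.T
    L = np.linalg.cholesky(G)        # G = L L^T
    y = np.linalg.solve(L, d)        # forward substitution
    lam = np.linalg.solve(L.T, y)    # back substitution: Lagrange multipliers
    return C.T @ lam                 # the projection of the origin

# toy usage: project 0 onto {x in R^3 : x_1 + x_2 + x_3 = 3}
C = np.array([[1.0, 1.0, 1.0]])
d = np.array([3.0])
print(project_origin_onto_manifold(C, d))   # -> [1. 1. 1.]
```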
An approach to developing algorithms for Newton methods of unconstrained optimization, their software implementation and benchmarking
Computer Research and Modeling, 2013, v. 5, no. 3, pp. 367-377
An approach is proposed for increasing the efficiency of Gill and Murray's algorithm for Newton methods of unconstrained optimization with step-length adjustment; it rests on Cholesky factorization. It is proved that the strategy for choosing the descent direction also determines the solution of the problems of step scaling during descent, of approximation by non-quadratic functions, and of integration with a trust-region method.
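As a rough illustration of how a Cholesky-based Newton method safeguards the descent direction, here is a simplified diagonal-shift stand-in for the Gill–Murray modified Cholesky factorization; the shift schedule and tolerances are illustrative assumptions rather than the algorithm benchmarked in the paper.

```python
import numpy as np

def newton_direction(H, g, beta=1e-3, max_shifts=60):
    """Return a descent direction d solving (H + tau*I) d = -g.

    If the Hessian H is not positive definite, a growing multiple of the
    identity is added until an ordinary Cholesky factorization succeeds;
    Gill and Murray instead modify the diagonal during the factorization itself.
    """
    n = len(g)
    tau = 0.0
    for _ in range(max_shifts):
        try:
            L = np.linalg.cholesky(H + tau * np.eye(n))
            y = np.linalg.solve(L, -g)
            return np.linalg.solve(L.T, y)
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, beta)   # increase the shift and retry
    raise RuntimeError("failed to obtain a positive definite modification")
```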
Direct multiplicative methods for sparse matrices. Unbalanced linear systems
Computer Research and Modeling, 2016, v. 8, no. 6, pp. 833-860
The small practical value of many numerical methods for solving systems of linear equations with ill-conditioned matrices is due to the fact that in practice these methods behave quite differently than they do in exact arithmetic. Historically, stability did not receive enough attention, unlike in the numerical algebra of 'medium-sized' problems, and the emphasis was placed on solving problems of the largest order the computer could accommodate, even at the cost of some loss of accuracy. The main objects of study are therefore the most appropriate storage of the information contained in a sparse matrix and the preservation of the highest possible degree of sparsity at all stages of the computational process. Thus, the development of efficient numerical methods for solving unstable systems is among the topical problems of computational mathematics.
In this paper an approach is presented to the construction of numerically stable direct multiplicative methods for solving systems of linear equations that takes into account the sparsity of matrices stored in packed form. The advantage of the approach is that the fill-in of the principal rows of the multipliers is minimized without compromising the accuracy of the results, and no changes are made in the position of the next row of the matrix to be processed, which allows static data storage formats to be used. A storage format for sparse matrices is studied whose advantage is that any matrix operations can be performed in parallel without unpacking, which significantly reduces execution time and memory footprint.
Direct multiplicative methods for solving systems of linear equations are best suited to large problems: sparse systems yield multipliers whose principal row is also sparse, and multiplying a row vector by such a multiplier has complexity proportional to the number of nonzero elements of the multiplier.
As a direct continuation of this work, it is proposed to take as the basis for constructing a direct multiplicative algorithm of linear programming a modification of the direct multiplicative algorithm for solving systems of linear equations, integrating the linear programming technique for choosing the pivot element. Direct multiplicative methods of linear programming are, in turn, best suited for constructing a direct multiplicative algorithm for setting the descent direction in Newton methods of unconstrained optimization by integrating one of the existing techniques for building an essentially positive definite matrix of second derivatives.
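To illustrate the complexity claim above, here is a small sketch of applying an elementary row multiplier stored sparsely, so that the cost is proportional to its number of nonzero entries; the names and the dict-based storage are illustrative assumptions, not the packed format studied in the paper.

```python
import numpy as np

def apply_row_multiplier(x, k, row):
    """Apply M = I + e_k row^T to x, where only the k-th (principal) row of M
    differs from the identity and is stored as a sparse dict {column: value}."""
    y = np.asarray(x, dtype=float).copy()
    y[k] += sum(value * x[j] for j, value in row.items())   # only nonzeros contribute
    return y

# toy usage: a 4-vector and a principal row with two nonzero entries
x = np.array([1.0, 2.0, 3.0, 4.0])
print(apply_row_multiplier(x, k=2, row={0: 0.5, 3: -1.0}))   # -> [ 1.   2.  -0.5  4. ]
```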
Direct multiplicative methods for sparse matrices. Linear programming
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 143-165
Multiplicative methods for sparse matrices are best suited to reducing the complexity of the solution of systems of linear equations performed at each iteration of the simplex method. The constraint matrix in these problems is sparsely populated with nonzero elements, which makes it possible to obtain multipliers whose principal columns are also sparse, while multiplying a vector by a multiplier has complexity proportional to the number of nonzero elements of that multiplier. In addition, when passing to an adjacent basis, the multiplicative representation is quite easily updated. Improving the efficiency of such methods requires reducing the number of nonzero elements in the multiplicative representation. However, at each iteration of the algorithm one more multiplier is appended to the sequence, so the complexity of multiplication grows linearly with the length of the sequence. The inverse matrix therefore has to be recomputed from time to time, rebuilding it from the identity; overall, however, this does not solve the problem. Moreover, the set of multipliers is a sequence of structures whose size is inconveniently large and not known precisely. Multiplicative methods do not exploit the high degree of sparsity of the original matrices or the equality constraints, require an initial basic feasible solution to be determined and, consequently, do not allow the dimensionality of the linear programming problem to be reduced or a regular compression procedure to be applied, that is, reducing the dimensionality of the multipliers and eliminating nonzero elements from all the principal columns of the multipliers obtained at previous iterations. Thus, the development of numerical methods for solving linear programming problems that make it possible to overcome or substantially reduce the shortcomings of implementations of the simplex method is among the topical problems of computational mathematics.
In this paper an approach is presented to the construction of numerically stable direct multiplicative methods for solving linear programming problems that takes into account the sparsity of matrices stored in packed form. The advantage of the approach is that the dimensionality is reduced and the fill-in of the principal rows of the multipliers is minimized without compromising the accuracy of the results, and no changes are made in the position of the next row of the matrix to be processed, which allows static data storage formats to be used.
As a direct continuation of this work, it is proposed to take as the basis for constructing a direct multiplicative algorithm for setting the descent direction in Newton methods of unconstrained optimization a modification of the direct multiplicative method of linear programming, integrating one of the existing techniques for building an essentially positive definite matrix of second derivatives.
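For context, here is a minimal sketch of the classical product form of the inverse used in revised simplex implementations, which is the kind of multiplicative representation whose growth and periodic refactorization the abstract discusses; the function names are hypothetical, and the paper's direct multiplicative scheme itself is different.

```python
import numpy as np

def eta_from_entering_column(B_inv_a, r):
    """Eta-column for the update B_new^{-1} = E * B^{-1} when the entering
    column a (with B_inv_a = B^{-1} a) replaces basis position r."""
    eta = -B_inv_a / B_inv_a[r]
    eta[r] = 1.0 / B_inv_a[r]
    return eta

def apply_eta_sequence(etas, x):
    """Apply the stored factors E_1, ..., E_k (oldest first) to the vector x.

    Each step costs time proportional to the eta-column length, so the total
    cost grows with the number of iterations until a refactorization is done.
    """
    y = np.asarray(x, dtype=float).copy()
    for r, eta in etas:
        pivot = y[r]
        y = y + eta * pivot       # update every component with the eta-column
        y[r] = eta[r] * pivot     # the pivot position is fully replaced
    return y
```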
Neural network model for determining the functional state of human intoxication in some transport safety problems
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 285-293
This article addresses the problem of determining the functional state of intoxication of vehicle drivers. Its solution is relevant to transport safety during pre-trip medical examinations. The solution is based on the pupillometry method, which makes it possible to assess the driver's state from the pupillary reaction to a change in illumination. The task is to determine the driver's state of intoxication by analyzing the values of the pupillogram parameters, a time series characterizing the change in pupil size in response to a short light pulse. It is proposed to analyze the pupillograms with a neural network. A neural network model for determining the functional state of driver intoxication is developed. For its training, specially prepared data samples are used, namely the values of the following pupillary reaction parameters grouped into two classes of driver functional states: initial diameter, minimum diameter, half-constriction diameter, final diameter, constriction amplitude, constriction rate, dilation rate, latent reaction time, constriction time, dilation time, half-constriction time, and half-dilation time. An example of the initial data is given. Based on their analysis, a neural network model is constructed in the form of a perceptron with one hidden layer, consisting of twelve input neurons, twenty-five hidden-layer neurons, and one output neuron. To increase the adequacy of the model, the optimal cut-off point between the decision classes at the network output is determined using ROC analysis. A scheme for determining the driver's state of intoxication is proposed, which includes the following steps: video registration of the pupillary reaction, construction of the pupillogram, calculation of the parameter values, analysis of the data with the neural network model, classification of the driver's condition as "norm" or "deviation from the norm", and making a decision on the person being examined. The medical worker conducting the examination is presented with the neural network's assessment of the driver's intoxication state, and on the basis of this assessment a conclusion is drawn on admitting the driver to drive the vehicle or removing him from driving. Thus, the neural network model increases the efficiency of pre-trip medical examination by increasing the reliability of the decisions made.
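For orientation, here is a minimal sketch of the 12-25-1 perceptron architecture described above; the weights are random placeholders rather than the trained model from the paper, the activation functions are assumptions, and the cut-off plays the role of the ROC-selected threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(25, 12)), np.zeros(25)   # input -> hidden layer
W2, b2 = rng.normal(size=(1, 25)), np.zeros(1)     # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify(pupillogram_features, cutoff=0.5):
    """Forward pass over the 12 pupillary-reaction parameters; `cutoff` is the
    threshold separating the 'norm' and 'deviation from the norm' classes."""
    h = np.tanh(W1 @ pupillogram_features + b1)
    score = sigmoid(W2 @ h + b2)[0]
    return "norm" if score < cutoff else "deviation from the norm"

print(classify(rng.normal(size=12)))
```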
The global rate of convergence for optimal tensor methods in smooth convex optimization
Computer Research and Modeling, 2018, v. 10, no. 6, pp. 737-753
In this work we consider the Monteiro–Svaiter accelerated hybrid proximal extragradient (A-HPE) framework and the accelerated Newton proximal extragradient (A-NPE) framework. The latter framework contains an optimal method for sufficiently smooth convex optimization problems with a second-order oracle. We generalize the A-NPE framework to oracles (schemes) with higher-order derivatives. We replace the Newton-type step in A-NPE that was used for the auxiliary problem by a regularized Newton (tensor) type step (Yu. Nesterov, 2018). Moreover, we generalize the large-step A-HPE/A-NPE framework by replacing the Monteiro–Svaiter large-step condition so that the framework can work with high-order schemes. The main contribution of the paper is as follows: we propose optimal high-order methods for convex optimization problems. As far as we know, at the moment there exist only zeroth-, first- and second-order optimal methods that match the lower bounds. For higher-order schemes there is a gap between the lower bounds (Arjevani, Shamir, Shiff, 2017) and the existing high-order (tensor) methods (Nesterov–Polyak, 2006; Yu. Nesterov, 2008; M. Baes, 2009; Yu. Nesterov, 2018). Asymptotically, the ratio of the convergence rates of the best existing methods to the lower bounds is about 1.5. In this work we eliminate this gap and show that the lower bounds are tight. We also consider sufficiently smooth strongly convex optimization problems and show how to generalize the proposed methods to this case. The basic idea is to use a restart technique until the iteration sequence reaches the region of quadratic convergence of Newton's method, and then to use Newton's method. One can show that the considered method converges with the optimal rate up to a logarithmic factor. Note that the technique proposed in this work can be generalized to the case when the auxiliary problem cannot be solved exactly and, moreover, when even the derivatives of the functional cannot be computed exactly. The proposed technique can also be generalized to composite optimization problems and, in particular, to constrained convex optimization problems. We also formulate a list of open questions that arise around the main result of this paper (an optimal universal method of high order, etc.).
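As a pointer to the "regularized Newton (tensor) type step" mentioned above, here is a hedged sketch of the p-th order model minimized at each such step, in notation close to Yu. Nesterov (2018); the exact normalization of the regularization constant varies between references and is an assumption here.

```latex
% p-th order (tensor) step: minimize the Taylor model of f at x_k plus a
% (p+1)-st power regularizer; the constant M must dominate the Lipschitz
% constant of the p-th derivative, and its precise scaling differs by source.
\[
x_{k+1} \in \arg\min_{y}\;\Bigl\{
  f(x_k) + \sum_{i=1}^{p} \frac{1}{i!}\, D^i f(x_k)[y - x_k]^{i}
  + \frac{M}{(p+1)!}\,\|y - x_k\|^{p+1}
\Bigr\}.
\]
```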
On some stochastic mirror descent methods for constrained online optimization problems
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 205-217
The problem of online convex optimization naturally arises in cases where statistical information is updated. The mirror descent method is well known for non-smooth optimization problems; it extends the subgradient method for solving non-smooth convex optimization problems to the case of a non-Euclidean distance. This paper is devoted to a stochastic variant of recently proposed mirror descent methods for convex online optimization problems with convex Lipschitz (generally, non-smooth) functional constraints. This means that we can still use the value of the functional constraint, but instead of the (sub)gradients of the objective functional and the functional constraint we use their stochastic (sub)gradients. More precisely, assume that on a closed subset of an $n$-dimensional vector space, $N$ convex Lipschitz non-smooth functionals are given. The problem is to minimize the arithmetic mean of these functionals subject to a convex Lipschitz constraint. Two methods using stochastic (sub)gradients are proposed for solving this problem: an adaptive method (which does not require knowledge of the Lipschitz constant either for the objective functional or for the constraint functional) and a non-adaptive method (which requires knowledge of the Lipschitz constant for the objective functional and the constraint functional). Note that the stochastic (sub)gradient of each functional is allowed to be computed only once. In the case of non-negative regret, we find that the number of non-productive steps is $O(N)$, which indicates the optimality of the proposed methods. We consider an arbitrary proximal structure, which is essential for decision-making problems. The results of numerical experiments are presented, allowing the adaptive and non-adaptive methods to be compared on some examples. It is shown that the adaptive method can significantly improve the number of solutions found.
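To illustrate the productive/non-productive step logic described above, here is a simplified Euclidean sketch of mirror descent with a functional constraint; the step-size rule, tolerances, and names are illustrative assumptions, and the non-Euclidean proximal structure and regret analysis of the paper are not reproduced.

```python
import numpy as np

def mirror_descent_constrained(grad_f, g, grad_g, x0, eps, n_steps):
    """Simplified Euclidean sketch of mirror descent with a functional constraint.

    If the constraint g(x) <= eps is (almost) satisfied, a 'productive' step
    along a stochastic subgradient of the objective is taken; otherwise a
    'non-productive' step along a subgradient of the constraint is taken.
    Step sizes follow the simple adaptive rule h = eps / ||subgradient||^2.
    """
    x = np.asarray(x0, dtype=float)
    productive = []
    for _ in range(n_steps):
        if g(x) <= eps:
            d = grad_f(x)                 # productive step
            productive.append(x.copy())
        else:
            d = grad_g(x)                 # non-productive step
        h = eps / max(np.dot(d, d), 1e-12)
        x = x - h * d                     # Euclidean prox-mapping
    return np.mean(productive, axis=0) if productive else x
```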
Development of network computational models for the study of nonlinear wave processes on graphs
Computer Research and Modeling, 2019, v. 11, no. 5, pp. 777-814
In various applications, problems arise that are modeled by nonlinear partial differential equations on graphs (networks, trees). In order to study such problems, as well as various extreme situations arising in the design and optimization of networks, a computational model was developed based on solving the corresponding boundary value problems for hyperbolic partial differential equations on graphs (networks, trees). Three different problems solved within this general framework of network computational models were chosen as applications.
The first is traffic flow modeling. In solving this problem a macroscopic approach was used, in which the traffic flow is described by a nonlinear system of second-order hyperbolic equations. The results of numerical simulations showed that the model developed within the proposed approach reproduces well the real situation on various sections of the Moscow transport network over significant time intervals, and can also be used to select the most suitable traffic management strategy in the city.
The second is modeling of data flows in computer networks. In this problem the data flows of various connections in a packet data network were simulated as flows of a continuous medium. Conceptual and mathematical network models are proposed. Numerical simulation was carried out in comparison with the NS-2 network simulation system. The results showed that, compared with the packet-level NS-2 model, the developed streaming model provides significant savings in computing resources while ensuring a good level of similarity, and allows the behavior of complex globally distributed IP networks to be simulated.
The third is simulation of the distribution of gas impurities in ventilation networks. A computational mathematical model was developed for the propagation of finely dispersed or gaseous impurities in ventilation networks using the gas dynamics equations and the numerical coupling of regions of different dimensionality. The calculations showed that the model makes it possible to determine the distribution of gas-dynamic parameters in a pipeline network with good accuracy and to solve problems of dynamic ventilation management.
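As a generic illustration of the boundary value problems on graphs described above (not the specific second-order systems used in the paper), a scalar conservation law can be posed on each edge and coupled through flux conservation at the interior nodes:

```latex
% Scalar conservation law on every edge e of length \ell_e, with flux balance
% at each interior node v; \rho_e is the conserved density and Q_e the flux.
\[
\frac{\partial \rho_e}{\partial t} + \frac{\partial Q_e(\rho_e)}{\partial x} = 0,
\qquad x \in (0, \ell_e), \; t > 0, \; e \in E,
\]
\[
\sum_{e \in \mathrm{in}(v)} Q_e\bigl(\rho_e(\ell_e, t)\bigr)
= \sum_{e \in \mathrm{out}(v)} Q_e\bigl(\rho_e(0, t)\bigr),
\qquad v \in V .
\]
```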
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
International Interdisciplinary Conference "Mathematics. Computing. Education"