Lower bounds for conditional gradient type methods for minimizing smooth strongly convex functions
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 213-223
In this paper, we consider conditional gradient methods for optimizing strongly convex functions. These are methods that use a linear minimization oracle, which, for a given vector $p \in \mathbb{R}^n$, computes the solution of the subproblem
\[ \text{Argmin}_{x\in X}{\langle p,\,x \rangle}. \]
A variety of conditional gradient methods achieve a linear convergence rate in the strongly convex case. However, in all these methods the dimension of the problem enters the convergence rate, and in modern applications the dimension can be very large. In this paper, we prove that in the strongly convex case the convergence rate of conditional gradient methods depends on the dimension $n$ at best as $\widetilde{\Omega}\left(\sqrt{n}\right)$. Thus, conditional gradient methods may turn out to be ineffective for solving strongly convex optimization problems of large dimension.
The application of conditional gradient methods to quadratic minimization problems is also considered. The effectiveness of the Frank – Wolfe method for solving the quadratic optimization problem in the convex case on a simplex (PageRank) has already been proved. This work shows that using conditional gradient methods to minimize a quadratic form in the strongly convex case is ineffective because the dimension enters their convergence rate. Therefore, the Shrinking Conditional Gradient method is considered. It differs from conditional gradient methods in that it uses a modified linear minimization oracle, which, for a given vector $p \in \mathbb{R}^n$, computes the solution of the subproblem \[ \text{Argmin}\{\langle p, \,x \rangle\colon x\in X, \;\|x-x_0^{}\| \leqslant R \}. \] The convergence rate of this algorithm does not depend on the dimension. Using the Shrinking Conditional Gradient method, the complexity (the total number of arithmetic operations) of minimizing a quadratic form on an $\infty$-ball is obtained. The resulting estimate is comparable to the complexity of the gradient method.
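A minimal sketch of the method this abstract discusses (not the paper's lower-bound construction): the classical Frank – Wolfe iteration with step size $2/(k+2)$, using the linear minimization oracle of the $\infty$-ball, applied to a small strongly convex quadratic. The matrix `A`, vector `b`, and radius `R` are illustrative choices.

```python
import numpy as np

def lmo_inf_ball(p, R=1.0):
    # argmin of <p, x> over the ball ||x||_inf <= R is attained at a sign vertex
    return -R * np.sign(p)

def frank_wolfe(A, b, R=1.0, steps=200):
    # Minimize f(x) = 0.5 x^T A x - b^T x over the infinity-ball of radius R.
    x = np.zeros_like(b)
    for k in range(steps):
        grad = A @ x - b
        s = lmo_inf_ball(grad, R)          # call the linear minimization oracle
        gamma = 2.0 / (k + 2)              # classical open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

A = np.diag([2.0, 4.0])
b = np.array([1.0, 1.0])
x = frank_wolfe(A, b)
# The unconstrained minimizer A^{-1} b = (0.5, 0.25) lies inside the ball,
# so the iterates approach it (slowly, as is typical of FW near an interior optimum).
```

The zig-zagging of the iterates between vertices of the feasible set is exactly the behavior that the dimension-dependent lower bound exploits.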
Keywords: Frank – Wolfe method, Shrinking Conditional Gradient. -
Modeling of disassembly processes of complex products
Computer Research and Modeling, 2022, v. 14, no. 3, pp. 525-537
The work is devoted to modeling the processes of disassembling complex products in CAD-systems. The ability to dismantle a product in a given sequence is formed at the early design stages and is realized at the end of the life cycle. Therefore, modern CAD-systems should have tools for assessing the complexity of dismantling the parts and assembly units of a product. A hypergraph model of the mechanical structure of the product is proposed. It is shown that the mathematical description of coherent and sequential disassembly operations is a normal cut of a hypergraph edge. A theorem on the properties of normal cuts is proved. This theorem allows us to organize a simple recursive procedure for generating all cuts of the hypergraph. The set of all cuts is represented as an AND/OR-tree. The tree contains information about plans for disassembling the product and its parts. Mathematical descriptions of various types of disassembly processes are proposed: complete, incomplete, linear, nonlinear. It is shown that a decision graph of the AND/OR-tree is a model of disassembling the product and all components obtained in the process of dismantling. An important characteristic of the complexity of dismantling parts is considered, the depth of nesting, and an efficient method for computing a lower bound for this characteristic is developed.
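An illustrative sketch of the recursive generation of disassembly plans as an AND/OR-tree. It simplifies the paper's setting: the mechanical structure is an ordinary connection graph of a four-part chain `a-b-c-d` rather than a hypergraph, and any split into two connected subassemblies plays the role of a "normal cut".

```python
import itertools

# Connection graph of a hypothetical four-part chain product a-b-c-d.
EDGES = {frozenset(e) for e in [("a", "b"), ("b", "c"), ("c", "d")]}

def connected(parts):
    """Check that the subassembly `parts` is connected in the graph."""
    parts = set(parts)
    seen, stack = set(), [next(iter(parts))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(u for u in parts if frozenset({v, u}) in EDGES)
    return seen == parts

def plans(parts):
    """Build an AND/OR-tree of disassembly plans by recursive splitting."""
    parts = frozenset(parts)
    if len(parts) == 1:
        return next(iter(parts))           # leaf: a single part
    anchor, *rest = sorted(parts)
    alternatives = []                      # OR-node: the possible first splits
    for r in range(len(rest)):
        for combo in itertools.combinations(rest, r):
            left = frozenset(combo) | {anchor}
            right = parts - left
            if connected(left) and connected(right):
                # AND-node: both subassemblies are disassembled further
                alternatives.append((plans(left), plans(right)))
    return alternatives

tree = plans({"a", "b", "c", "d"})
# The chain admits three top-level splits: a|bcd, ab|cd, abc|d.
```

Fixing an `anchor` part in the left subassembly ensures each unordered split is generated once, which is the role the paper's theorem on normal cuts plays for hypergraphs.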
-
Optimal threshold selection algorithms for multi-label classification: property study
Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1221-1238
Multi-label classification models arise in various areas of life, which is explained by the increasing amount of information that requires prompt analysis. One mathematical approach to this problem is the plug-in approach: at the first stage, a ranking function is built for each class, ordering all objects in some way, and at the second stage optimal thresholds are selected, so that objects on one side of a threshold are assigned to the class and objects on the other side are not. Thresholds are chosen to maximize the target quality measure. The algorithms whose properties are investigated in this article address the second stage of the plug-in approach, the choice of the optimal threshold vector. This step becomes non-trivial if the $F$-measure of average precision and recall is used as the target quality measure, since it does not allow independent threshold optimization in each class. In extreme multi-label classification the number of classes can reach hundreds of thousands, so the original optimization problem is reduced to the problem of searching for a fixed point of a specially introduced transformation $\boldsymbol V$, defined on the unit square in the plane of average precision $P$ and recall $R$. Using this transformation, two optimization algorithms are proposed: the $F$-measure linearization method and the method of $\boldsymbol V$ domain analysis. The properties of the algorithms are studied on multi-label classification data sets of various sizes and origins; in particular, the dependence of the error on the number of classes, on the $F$-measure parameter, and on the internal parameters of the methods is examined. A peculiarity of both algorithms was found on problems whose $\boldsymbol V$ domain contains large linear boundaries.
When the optimal point lies in the vicinity of these boundaries, the errors of both methods do not decrease as the number of classes grows. In this case, the linearization method determines the argument of the optimal point quite accurately, while the method of $\boldsymbol V$ domain analysis accurately determines the polar radius.
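A toy sketch of why the second plug-in stage is coupled across classes (this is not the paper's $\boldsymbol V$-transformation method, just a brute-force illustration on synthetic scores): because precision $P$ and recall $R$ are pooled over all classes, the per-class thresholds cannot be optimized independently.

```python
import itertools
import numpy as np

def micro_f(scores, labels, thresholds):
    pred = scores >= np.asarray(thresholds)     # shape (objects, classes)
    tp = np.logical_and(pred, labels).sum()     # pooled true positives
    p = tp / max(pred.sum(), 1)                 # pooled (micro) precision
    r = tp / max(labels.sum(), 1)               # pooled (micro) recall
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

rng = np.random.default_rng(0)
scores = rng.random((50, 2))
labels = scores + 0.3 * rng.standard_normal((50, 2)) > 0.5  # noisy ground truth

# Exhaustive search over the threshold vector; feasible only for tiny problems,
# which is why the paper needs its fixed-point reduction for huge class counts.
candidates = np.linspace(0.1, 0.9, 9)
best_f, best_t = max(
    (micro_f(scores, labels, t), t)
    for t in itertools.product(candidates, repeat=2)
)
```

Changing the threshold of one class changes the pooled $P$ and $R$, and hence the optimal thresholds of all the other classes; this coupling is what the fixed-point formulation over the $(P, R)$ unit square removes.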
-
Stationary states and bifurcations in a one-dimensional active medium of oscillators
Computer Research and Modeling, 2023, v. 15, no. 3, pp. 491-512
This article presents the results of an analytical and computer study of the collective dynamic properties of a chain of self-oscillating systems (conditionally, oscillators). The couplings of individual elements of the chain are assumed to be non-reciprocal and unidirectional: each element of the chain is under the influence of the previous one, while the reverse reaction is absent (physically insignificant). This is the main feature of the chain. The system can be interpreted as an active discrete medium with unidirectional transfer, in particular the transfer of matter. Such chains can serve as mathematical models of real systems with a lattice structure that occur in various fields of natural science and technology: physics, chemistry, biology, radio engineering, economics, etc. They can also model technological and computational processes. Nonlinear self-oscillating systems with a wide “spectrum” of potentially possible individual self-oscillations, from periodic to chaotic, were chosen as the elements of the lattice. This allows one to explore various dynamic modes of the chain, from regular to chaotic, by changing the parameters of the elements without changing the nature of the elements themselves. The joint application of qualitative methods of the theory of dynamical systems and qualitative-numerical methods yields a clear picture of all possible dynamic regimes of the chain. The conditions for the existence and stability of spatially homogeneous dynamic regimes (deterministic and chaotic) of the chain are studied. The analytical results are illustrated by a numerical experiment. The dynamical regimes of the chain are studied under perturbations of parameters at its boundary.
The possibility of controlling the dynamic regimes of the chain by switching on the necessary perturbation at the boundary is shown. Various cases of the dynamics of chains comprised of inhomogeneous (different in their parameters) elements are considered. Global chaotic synchronization (of all oscillators in the chain) is studied analytically and numerically.
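A minimal sketch of the chain architecture described above (not the article's specific oscillators): identical chaotic logistic maps with unidirectional coupling, each element driven only by its predecessor, the first element autonomous. The map, the parameter `r`, and the coupling strength `c` are illustrative choices.

```python
import numpy as np

def step(x, r=3.8, c=0.8):
    f = r * x * (1.0 - x)                  # local chaotic dynamics of each element
    y = (1.0 - c) * f + c * np.roll(f, 1)  # unidirectional drive from the predecessor
    y[0] = f[0]                            # the first (boundary) element has no driver
    return y

rng = np.random.default_rng(1)
x = rng.random(5)                          # random initial states in (0, 1)
for _ in range(2000):
    x = step(x)
# For this coupling strength the spatially homogeneous (synchronized) regime
# is stable: all elements collapse onto the orbit of the first one.
```

Because information flows only forward, the state of the first element propagates down the chain, which is also why perturbing parameters at the boundary can control the regime of the whole chain.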
Keywords: dynamical system, lattice, bifurcations, oscillator, phase space, dynamical chaos, synchronization. -
Simulation of turbulent compressible flows in the FlowVision software
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 805-825
Simulation of turbulent compressible gas flows using the standard $k-\varepsilon$ model (KES), the $k-\varepsilon$ FlowVision model (KEFV), and the SST $k-\omega$ model is discussed in this article. A new version of the KEFV turbulence model is presented, and the results of its testing are shown. A numerical investigation of the discharge of an over-expanded jet from a conic nozzle into unbounded space is performed, and the results are compared against experimental data. The dependence of the results on the computational mesh and on the turbulence specified at the nozzle inlet is demonstrated. The conclusion is drawn that two-parametric turbulence models must account for compressibility; the simple method proposed by Wilcox in 1994 suits this purpose well. As a result, the range of applicability of the three aforementioned two-parametric turbulence models is essentially extended. Particular values of the constants responsible for the account of compressibility in the Wilcox approach are proposed, and it is recommended to use these values in simulations of compressible flows with the KES, KEFV, and SST models.
In addition, the question of how to obtain correct characteristics of supersonic turbulent flows using two-parametric turbulence models is considered. Calculations on different grids have shown that, with a laminar flow specified at the nozzle inlet and wall functions at its surfaces, one obtains a laminar core of the flow up to the fifth Mach disk. To obtain correct flow characteristics, it is necessary either to specify two parameters characterizing the turbulence of the inflowing gas, or to set a “starting” turbulence in a limited volume enveloping the region of the presumable laminar-turbulent transition next to the nozzle exit. The latter possibility is implemented in the KEFV model.
-
Synthesis of the structure of organised systems as central problem of evolutionary cybernetics
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1103-1124
The article provides approaches to evolutionary modelling of the synthesis of organised systems and analyses methodological problems of evolutionary computations of this kind. Based on the analysis of works on evolutionary cybernetics, evolutionary theory, systems theory and synergetics, we conclude that there are open problems in formalising the synthesis of organised systems and modelling their evolution. The article emphasises that the theoretical basis for the practice of evolutionary modelling is the principles of the modern synthetic theory of evolution. Our software project uses a virtual computing environment for the machine synthesis of problem-solving algorithms. From the results obtained in the modelling process we conclude that a number of conditions fundamentally limit the applicability of genetic programming methods to the synthesis of functional structures. The main limitations are the need for the fitness function to track the step-by-step approach to the solution of the problem and the inapplicability of this approach to the synthesis of hierarchically organised systems. We note that the results obtained in the practice of evolutionary modelling over the whole time of its existence confirm the conclusion that the possibilities of genetic programming are fundamentally limited in solving problems of synthesising the structure of organised systems. As sources of fundamental difficulties for the machine synthesis of system structures, the article points out the absence of directions for gradient descent in structural synthesis and the absence of regularity in the random appearance of new organised structures. The considered problems are relevant for the theory of biological evolution. The article substantiates the statement about the biological specificity of the practically possible ways of synthesising the structure of organised systems.
As a theoretical interpretation of the discussed problem, we propose to consider the system-evolutionary concept of P. K. Anokhin. The process of synthesis of functional structures in this context is an adaptive response of organisms to external conditions, based on their capacity for integrative synthesis of memory, needs and information about current conditions. The results of current studies support this interpretation. We note that the physical basis of biological integrativity may be related to the phenomena of non-locality and non-separability characteristic of quantum systems. The problems considered in this paper are closely related to the problem of creating strong artificial intelligence.
-
The iterations’ number estimation for strongly polynomial linear programming algorithms
Computer Research and Modeling, 2024, v. 16, no. 2, pp. 249-285
A direct algorithm for solving a linear programming (LP) problem given in canonical form is considered. The algorithm consists of two successive stages, in each of which an LP problem is solved by a direct method: a non-degenerate auxiliary problem at the first stage and a problem equivalent to the original one at the second. The construction of the auxiliary problem is based on a multiplicative version of the Gaussian elimination method, whose very structure offers possibilities for: detecting inconsistency and linear dependence of constraints; identifying variables whose optimal values are obviously zero; actually eliminating primal variables and reducing the dimension of the space in which the solution of the original problem is determined. In the process of eliminating variables, the algorithm generates a sequence of multipliers whose main rows form the constraint matrix of the auxiliary problem, and the possibility of minimizing the fill-in of the main rows of the multipliers is inherent in the very structure of direct methods. At the same time, there is no need to transfer information (basis, plan and optimal value of the objective function) to the second stage of the algorithm, or to apply one of the anti-cycling rules to guarantee finite convergence.
Two variants of the algorithm for solving the auxiliary problem in conjugate canonical form are presented. The first is based on solving it by a direct algorithm in terms of the simplex method, and the second on solving the dual problem by the simplex method. It is shown that, for the same inputs, both variants of the algorithm generate the same sequence of points: the basic solution and the current dual solution, the vector of row estimates. Hence it is concluded that the direct algorithm is an algorithm of simplex-method type. A comparison of the numerical schemes also shows that the direct algorithm reduces, by a cubic law, the number of arithmetic operations needed to solve the auxiliary problem compared with the simplex method. An estimate of the number of iterations is given.
-
Optimization of geometric analysis strategy in CAD-systems
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 825-840
Computer-aided assembly planning for complex products is an important engineering and scientific problem. The assembly sequence and the content of assembly operations largely depend on the mechanical structure and geometric properties of a product. An overview of the geometric modeling methods used in modern computer-aided design systems is provided. Modeling geometric obstacles in assembly using collision detection, motion planning, and virtual reality is very computationally intensive, while combinatorial methods provide only weak necessary conditions for geometric reasoning. The important problem of minimizing the number of geometric tests during the synthesis of assembly operations and processes is considered. The formalization of this problem is based on a hypergraph model of the mechanical structure of the product, which provides a correct mathematical description of coherent and sequential assembly operations. The key concept of a geometric situation is introduced: a configuration of product parts that requires analysis for freedom from obstacles and whose analysis gives interpretable results. A mathematical description of geometric heredity during the assembly of complex products is proposed. Two axioms of heredity allow us to extend the results of testing one geometric situation to many other situations. The problem of minimizing the number of geometric tests is posed as a non-antagonistic game between the decision maker and nature, in which the vertices of an ordered set must be colored in two colors. The vertices represent geometric situations, and the color is a metaphor for the result of a collision-free test. The decision maker's move is to select an uncolored vertex; nature's answer is its color. The game requires coloring the ordered set in a minimum number of moves by the decision maker.
The project situation in which the decision maker acts under risk is discussed. A method for calculating the probabilities of colorings of the vertices of an ordered set is proposed. The basic pure strategies of rational behavior in this game are described. An original synthetic criterion for making rational decisions under risk has been developed. Two heuristics are proposed that can be used to color ordered sets of high cardinality and complex structure.
-
Quantile shape measures for heavy-tailed distributions
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1041-1077
Journal papers currently contain numerous examples of the use of heavy-tailed distributions in applied research on various complex systems. Models of extreme data are usually limited to the small set of distribution shapes that have historically been used in each field of applied research. This set can be enlarged by comparing measures of distribution shape and choosing the most suitable ones. Using the beta distribution of the second kind as an example, it is shown that the moments of heavy-tailed members of the beta family of distributions may fail to be defined, which limits the applicability of the existing classical methods of moments for studying the shapes of distributions characterized by heavy tails. For this reason, the development of new methods for comparing distributions based on quantile shape measures, free from restrictions on the shape parameters, remains relevant. The purpose of this work is a computer study of the possibility of constructing a space of quantile shape measures for comparing distributions with heavy tails. On the basis of computer simulation, realizations of the distributions are mapped into the space of shape measures. Mapping the distributions into the space of only the parametric shape measures shows that the regions occupied by heavy-tailed distributions overlap, which makes it impossible to distinguish the shapes of distributions of different types in the plane of quantile skewness and kurtosis alone. It is well known that information measures of shape, such as the entropy and the entropy uncertainty interval, carry additional information about the shape of heavy-tailed distributions.
In this paper, a quantile entropy coefficient is proposed as an additional independent measure of shape, based on the ratio of the entropy and quantile uncertainty intervals. Estimates of the quantile entropy coefficient are obtained for a number of well-known heavy-tailed distributions. The possibility of comparing distribution shapes with realizations of the beta distribution of the second kind is illustrated with the lognormal and Pareto distributions. Mapping the stable distributions into the three-dimensional space of quantile shape measures made it possible to estimate the shape parameters of the beta distribution of the second kind whose shape is closest to the Lévy shape. It follows that mapping distributions into the three-dimensional space of quantile skewness, quantile kurtosis and the quantile entropy coefficient significantly expands the possibilities of comparing the shapes of distributions with heavy tails.
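A sketch of classical quantile shape measures of the kind the abstract builds on (the paper's quantile entropy coefficient is not reproduced here): Bowley's quartile skewness and Moors' octile kurtosis, both of which remain finite even when the distribution has no finite moments.

```python
import numpy as np

def bowley_skewness(x):
    q1, q2, q3 = np.quantile(x, [0.25, 0.5, 0.75])
    return (q3 + q1 - 2 * q2) / (q3 - q1)

def moors_kurtosis(x):
    e = np.quantile(x, np.arange(1, 8) / 8.0)   # octiles E1..E7
    return ((e[6] - e[4]) + (e[2] - e[0])) / (e[5] - e[1])

rng = np.random.default_rng(2)
normal = rng.standard_normal(100_000)
pareto = rng.pareto(1.0, 100_000)   # tail index 1: mean and variance do not exist
# The symmetric normal sample has quantile skewness near 0 (theoretical value 0),
# while the heavy-tailed Pareto sample is strongly right-skewed (theoretical value 0.5).
```

Because these measures are built from quantiles rather than moments, they can place a Pareto sample with undefined variance on the same shape plane as a normal sample, which is exactly the comparison the moment-based approach cannot perform.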
-
Numerical solution of the third initial-boundary value problem for the nonstationary heat conduction equation with fractional derivatives
Computer Research and Modeling, 2024, v. 16, no. 6, pp. 1345-1360
Recently, fractional differential calculus has been widely used to describe various mathematical models of physical processes. In this regard, much attention is paid to partial differential equations of fractional order, which generalize partial differential equations of integer order; various problem settings are possible.
In the literature, loaded differential equations are equations containing values of the solution or its derivatives on manifolds of lower dimension than the domain of the desired function. Numerical methods for solving loaded partial differential equations of integer and fractional order are currently widely used, since analytical methods of solution are often impossible. A fairly effective method for solving problems of this kind is the finite difference (grid) method.
We studied the initial-boundary value problem in the rectangle $\overline{D}=\{(x,\,t)\colon 0\leqslant x\leqslant l,\;0\leqslant t\leqslant T\}$ for the loaded differential heat equation with a composition of the Riemann – Liouville and Caputo – Gerasimov fractional derivatives and with boundary conditions of the first and third kind. We obtained a priori estimates in the differential and difference settings. The obtained inequalities imply the uniqueness of the solution and its continuous dependence on the input data of the problem. A difference analogue of the composition of the Riemann – Liouville and Caputo – Gerasimov fractional derivatives of order $(2-\beta)$ is obtained, and a difference scheme is constructed that approximates the original problem with order $O\left(\tau +h^{2-\beta } \right)$. The convergence of the approximate solution to the exact one is proven, at a rate equal to the order of approximation of the difference scheme.
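A sketch of the standard L1 difference approximation of the Caputo – Gerasimov fractional derivative of order $\alpha \in (0,1)$ on a uniform grid, one of the usual building blocks for difference schemes of the kind constructed in the paper (the paper's composition with the Riemann – Liouville derivative is not reproduced here).

```python
import math
import numpy as np

def caputo_l1(u, tau, alpha):
    """Approximate the Caputo derivative of order alpha at the last grid node."""
    n = len(u) - 1
    k = np.arange(n)
    weights = (k + 1) ** (1 - alpha) - k ** (1 - alpha)   # L1 weights b_k
    du = np.diff(u)[::-1]                                  # differences u_{n-k} - u_{n-k-1}
    return tau ** (-alpha) / math.gamma(2 - alpha) * np.dot(weights, du)

alpha, tau = 0.5, 1e-3
t = np.arange(0, 1 + tau / 2, tau)
approx = caputo_l1(t ** 2, tau, alpha)
# For u(t) = t^2 the Caputo derivative is 2 t^{2-alpha} / Gamma(3 - alpha).
exact = 2.0 / math.gamma(3 - alpha)                        # its value at t = 1
```

For smooth $u$ the L1 formula has approximation order $O(\tau^{2-\alpha})$, consistent with the fractional-order accuracy terms appearing in the scheme's error estimate.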