-
Variance reduction for minimax problems with a small dimension of one of the variables
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 257-275
The paper is devoted to convex-concave saddle point problems where the objective is a sum of a large number of functions. Such problems attract considerable attention from the mathematical community due to the variety of applications in machine learning, including adversarial learning, adversarial attacks and robust reinforcement learning, to name a few. The individual functions in the sum usually represent losses related to examples from a data set. Additionally, the formulation admits a possibly nonsmooth composite term; such terms often reflect regularization in machine learning problems. We assume that the dimension of one of the variable groups is relatively small (about a hundred or less), while the other one is large. This case arises, for example, when one considers the dual formulation of a minimization problem with a moderate number of constraints. The proposed approach is based on using Vaidya's cutting plane method to minimize with respect to the outer block of variables. This optimization algorithm is especially effective when the dimension of the problem is not very large. An inexact oracle for Vaidya's method is computed via an approximate solution of the inner maximization problem, which is solved by the accelerated variance-reduced algorithm Katyusha. Thus, we leverage the structure of the problem to achieve fast convergence. Separate complexity bounds for gradients of different components with respect to different variables are obtained in the study. The proposed approach imposes very mild assumptions on the objective; in particular, neither strong convexity nor smoothness is required with respect to the low-dimensional variable group. The number of steps of the proposed algorithm, as well as the arithmetic complexity of each step, explicitly depends on the dimensionality of the outer variable, hence the assumption that it is relatively small.
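The nested structure described in the abstract can be illustrated with a minimal sketch. For brevity, the outer cutting-plane method and the inner Katyusha solver are replaced here by plain subgradient descent and gradient ascent, respectively; the toy objective and all parameter values are assumptions made only for illustration.

```python
import numpy as np

# Toy saddle-point objective f(y, x) = y^T A x - 0.5*||x||^2 + 0.5*lam*||y||^2,
# convex in the small outer block y and strongly concave in the large block x.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 200))        # dim(y) = 3 is small, dim(x) = 200 is large
lam = 0.1

def inner_argmax(y, steps=200, lr=0.5):
    """Approximate solution of max_x f(y, x) by gradient ascent
    (a simple stand-in for the accelerated variance-reduced Katyusha solver)."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x += lr * (A.T @ y - x)      # grad_x f = A^T y - x
    return x

def outer_grad(y):
    """Inexact oracle for g(y) = max_x f(y, x): by Danskin's theorem its
    gradient is grad_y f evaluated at the (approximate) inner maximizer."""
    x_star = inner_argmax(y)
    return A @ x_star + lam * y

# Outer loop over the low-dimensional block: plain subgradient descent
# (a simple stand-in for Vaidya's cutting-plane method used in the paper).
y = np.ones(3)
for k in range(100):
    y -= 0.2 / np.sqrt(k + 1) * outer_grad(y)

print("outer variable after optimization:", y)
```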
-
Modern ways to overcome neural networks catastrophic forgetting and empirical investigations on their structural issues
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 45-56
This paper presents the results of experimental validation of some structural issues concerning the practical use of methods to overcome catastrophic forgetting in neural networks. A comparison of current effective methods, EWC (Elastic Weight Consolidation) and WVA (Weight Velocity Attenuation), is made and their advantages and disadvantages are considered. It is shown that EWC is better for tasks where full retention of learned skills is required on all the tasks in the training queue, while WVA is more suitable for sequential tasks with very limited computational resources, or when reuse of representations and acceleration of learning from task to task are required rather than exact retention of skills. The attenuation in the WVA method must be applied to the optimization step, i.e. to the increments of the neural network weights, rather than to the loss function gradient itself, and this holds for any gradient optimization method except the simplest stochastic gradient descent (SGD). The choice of the optimal weight attenuation function between a hyperbolic function and an exponential one is considered. It is shown that hyperbolic attenuation is preferable because, despite comparable quality at optimal values of the WVA hyperparameter, it is more robust to deviations of this hyperparameter from the optimal value (the hyperparameter balances preservation of old skills against learning a new skill). Empirical observations are presented that support the hypothesis that the optimal value of this hyperparameter does not depend on the number of tasks in the sequential learning queue; consequently, it can be tuned on a small number of tasks and used on longer sequences.
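A minimal sketch of the point about where the attenuation is applied: the per-weight factor scales the optimizer's step (the weight increment), not the raw gradient. The hyperbolic attenuation form, the importance values and all parameters below are illustrative assumptions, not the WVA paper's exact formulation.

```python
import torch

def hyperbolic_attenuation(importance, alpha=1.0):
    """Per-weight attenuation factor in (0, 1]: weights with larger accumulated
    importance are changed less. The exact form used by WVA may differ."""
    return 1.0 / (1.0 + alpha * importance)

# Toy parameter learning a new target ("new task") with importance values
# assumed to have been accumulated on an earlier task.
w = torch.zeros(5, requires_grad=True)
importance = torch.tensor([5.0, 0.0, 2.0, 0.0, 10.0])   # illustrative values
target = torch.ones(5)
opt = torch.optim.Adam([w], lr=0.1)

for _ in range(100):
    opt.zero_grad()
    loss = ((w - target) ** 2).sum()
    loss.backward()
    w_before = w.detach().clone()
    opt.step()                                  # Adam produces the raw update
    with torch.no_grad():
        step = w - w_before                     # recover the optimizer's step
        # attenuate the *step* (not the gradient), as the abstract recommends
        w.copy_(w_before + hyperbolic_attenuation(importance) * step)

print(w)   # weights marked as "important" move much less toward the new target
```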
-
Fast adaptive by constants of strong-convexity and Lipschitz for gradient first order methods
Computer Research and Modeling, 2021, v. 13, no. 5, pp. 947-963
The work is devoted to the construction of efficient first-order convex optimization methods applicable to real tasks, that is, methods using only values of the objective function and its derivatives. The construction builds on OGM-G, a fast gradient method that is optimal in complexity but requires knowledge of the Lipschitz constant of the gradient and of the strong convexity constant to determine the number of steps and the step length, which makes practical usage very hard. An algorithm ACGM adaptive in the strong convexity constant is proposed, based on restarts of OGM-G with updates of the strong convexity constant estimate, as well as an algorithm ALGM adaptive in the Lipschitz constant of the gradient, in which OGM-G restarts are supplemented by a search for the Lipschitz constant with verification of the smoothness conditions used in the universal gradient descent method. This eliminates the disadvantages of the original method associated with the need to know these constants and makes practical usage possible. Optimality of the complexity estimates of the constructed algorithms is proved. To verify the results obtained, experiments on model functions and real machine learning tasks are carried out.
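A rough sketch of a restart scheme that adapts the strong convexity estimate, in the spirit of the described ACGM construction. Plain gradient descent stands in for OGM-G, the halving rule and the progress test are illustrative assumptions, and the toy problem assumes the optimal value is zero.

```python
import numpy as np

def base_fast_method(grad, x0, L, n_steps):
    """Stand-in for OGM-G: here simply gradient descent with step 1/L."""
    x = x0.copy()
    for _ in range(n_steps):
        x -= grad(x) / L
    return x

def restart_adaptive_mu(grad, f, x0, L, mu0=1.0, outer_iters=20):
    """Restart scheme adapting the strong convexity estimate mu: if a restart
    round does not give the decrease expected from a mu-strongly convex
    function, the estimate is halved and the per-round budget grows.
    The progress test assumes min f = 0 for simplicity."""
    x, mu = x0.copy(), mu0
    for _ in range(outer_iters):
        n_steps = max(1, int(np.ceil(np.sqrt(L / mu))))   # budget per restart
        x_new = base_fast_method(grad, x, L, n_steps)
        if f(x_new) > 0.5 * f(x):      # insufficient progress -> mu too large
            mu /= 2.0
        x = x_new
    return x, mu

# Toy quadratic f(x) = 0.5 x^T D x with an unknown smallest eigenvalue.
D = np.array([10.0, 1.0, 0.01])
f = lambda x: 0.5 * np.dot(D * x, x)
grad = lambda x: D * x
x_opt, mu_found = restart_adaptive_mu(grad, f, np.ones(3), L=10.0)
print("final value:", f(x_opt), " adapted mu estimate:", mu_found)
```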
-
First-order optimization methods are workhorses in a wide range of modern applications in economics, physics, biology, machine learning, control, and other fields. Among first-order methods, accelerated and momentum methods attract special attention because of their practical efficiency. The heavy-ball method (HB) is one of the first momentum methods: it was proposed in 1964, and the first analysis was conducted for quadratic strongly convex functions. Since then a number of variations of HB have been proposed and analyzed. In particular, HB is known for its simplicity of implementation and its performance on nonconvex problems. However, like other momentum methods, it has nonmonotone behavior, and for optimal parameters the method suffers from the so-called peak effect. To address this issue, in this paper we consider an averaged version of the heavy-ball method (AHB). We show that for quadratic problems AHB has a smaller maximal deviation from the solution than HB. Moreover, for general convex and strongly convex functions, we prove non-accelerated rates of global convergence of AHB, of its weighted version WAHB, and of AHB with restarts (R-AHB). To the best of our knowledge, such guarantees for HB with averaging were not explicitly proven for strongly convex problems in the existing works. Finally, we conduct several numerical experiments on minimizing quadratic and nonquadratic functions to demonstrate the advantages of using averaging for HB. We also test one more modification of AHB, the tail-averaged heavy-ball method (TAHB). In the experiments, we observed that HB with a properly adjusted averaging scheme converges faster than HB without averaging and has smaller oscillations.
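A minimal sketch of the averaged heavy-ball idea on an ill-conditioned quadratic: standard heavy-ball iterations combined with a running average of the iterates. The step size, momentum coefficient and test function are illustrative choices, not parameters from the paper.

```python
import numpy as np

def averaged_heavy_ball(grad, x0, alpha, beta, n_iters):
    """Heavy-ball iterations with running averaging of the iterates
    (a sketch of the AHB idea described in the abstract)."""
    x_prev, x = x0.copy(), x0.copy()
    x_avg = x0.copy()
    for k in range(1, n_iters + 1):
        x_next = x - alpha * grad(x) + beta * (x - x_prev)   # heavy-ball step
        x_prev, x = x, x_next
        x_avg += (x - x_avg) / (k + 1)                       # running average
    return x, x_avg

# Ill-conditioned quadratic where plain heavy ball oscillates noticeably.
D = np.array([100.0, 1.0])
grad = lambda x: D * x
x_hb, x_ahb = averaged_heavy_ball(grad, np.ones(2), alpha=0.03, beta=0.7, n_iters=200)
print("last iterate:", x_hb, " averaged iterate:", x_ahb)
```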
-
Machine learning interpretation of inter-well radiowave survey data
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 675-684
Traditional geological search methods are becoming ineffective: the exploration depth of kimberlite bodies and ore deposits has increased significantly. The only direct exploration method is to drill a system of wells to depths that provide access to the enclosing rocks. Due to the high cost of drilling, the role of inter-well survey methods has increased; they allow increasing the mean well spacing without significantly raising the probability of missing a kimberlite or ore body. The inter-well radio wave survey method is effective for finding high-contrast conductivity objects. The physics of the method is based on the dependence of electromagnetic wave propagation on the conductivity of the propagation medium. The source and receiver of electromagnetic radiation are electric dipoles placed in adjacent wells. Since the distance between the source and receiver is known, the medium absorption coefficient can be estimated from the rate of decrease of the radio wave amplitude. Rocks with low electrical resistance correspond to high absorption of radio waves. The inter-well measurement data allow estimating an effective electrical resistance (or conductivity) of the rock. Typically, the source and receiver are lowered into adjacent wells synchronously. The electric field amplitude measured at the receiver allows estimating the average value of the attenuation coefficient along the line connecting the source and receiver. The measurements are taken during stops, approximately every 5 m. The distance between stops is much smaller than the distance between adjacent wells, which leads to significant spatial anisotropy in the distribution of the measured data. The drill grid covers a large area, and our goal is to build a three-dimensional model of the distribution of the electrical properties of the inter-well space over the whole area. The anisotropy of the spatial distribution makes it hard to use the standard geostatistics approach. To build a three-dimensional model of the attenuation coefficient, we used one of the methods of machine learning, the method of nearest neighbors. In this method, the value of the absorption coefficient at a given point is calculated from the k nearest measurements; the number k should be determined from additional considerations. The effect of spatial anisotropy can be reduced by changing the spatial scale in the horizontal direction. The scale factor λ is one more external parameter of the problem. To select the values of the parameters k and λ we used the coefficient of determination. To demonstrate the construction of a three-dimensional image of the absorption coefficient, we apply the procedure to inter-well radio wave survey data obtained at one of the sites in Yakutia.
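A small sketch of the interpolation scheme described in the abstract: a k-nearest-neighbors regressor applied to coordinates whose horizontal components are rescaled by a factor λ, with the coefficient of determination used to assess a choice of k and λ. The synthetic data and the scikit-learn implementation are assumptions for illustration; they are not the authors' data or code.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import r2_score

def fit_absorption_model(points, values, k, lam):
    """k-nearest-neighbors model of the absorption coefficient with the
    horizontal coordinates rescaled by lam to compensate spatial anisotropy."""
    scaled = points.copy()
    scaled[:, :2] *= lam                      # stretch/shrink x, y; keep depth z
    model = KNeighborsRegressor(n_neighbors=k).fit(scaled, values)
    return model

# Synthetic stand-in for measurement midpoints (x, y, z) and attenuation values.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 500, size=(1000, 3))
vals = np.sin(pts[:, 2] / 50.0) + 0.1 * rng.normal(size=1000)

k, lam = 5, 0.2                               # candidate hyperparameter values
model = fit_absorption_model(pts[:800], vals[:800], k, lam)
test_scaled = pts[800:].copy()
test_scaled[:, :2] *= lam
print("R^2 on held-out points:", r2_score(vals[800:], model.predict(test_scaled)))
```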
-
On the relations of stochastic convex optimization problems with empirical risk minimization problems on p-norm balls
Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319
In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e.g., risk minimization) and mathematical statistics (e.g., maximum likelihood estimation). There are two main approaches to solving such problems, namely the Stochastic Approximation approach (online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline approach). In the offline approach, the problem is replaced by its empirical counterpart (the empirical risk minimization problem). The natural question is how to define the problem sample size, i.e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is also a solution of the original problem with the desired precision. This is one of the main issues in modern machine learning and optimization. In the last decade, a lot of significant advances have been made in solving convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of arbitrary balls in p-norms. We also explore how the parameter p affects the estimates of the required number of terms in the empirical risk.
In this paper, both convex and saddle point optimization problems are considered. For strongly convex problems, the existing results on the same sample sizes in both approaches (online and offline) were generalized to arbitrary norms. Moreover, it was shown that the strong convexity condition can be weakened: the obtained results are valid for functions satisfying the quadratic growth condition. In the case when this condition is not met, it is proposed to use the regularization of the original problem in an arbitrary norm. In contradistinction to convex problems, saddle point problems are much less studied. For saddle point problems, the sample size was obtained under the condition of γ-growth of the objective function. When γ=1, this condition is the condition of sharp minimum in convex problems. In this article, it was shown that the sample size in the case of a sharp minimum is almost independent of the desired accuracy of the solution of the original problem.
-
Review of algorithmic solutions for deployment of neural networks on lite devices
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1601-1619
In today's technology-driven world, lite devices like Internet of Things (IoT) devices and microcontrollers (MCUs) are becoming increasingly common. These devices are more energy-efficient and affordable, but often have reduced features compared to standard versions, such as very limited memory and processing power for typical machine learning models. However, modern machine learning models can have millions of parameters, resulting in a large memory footprint. This complexity not only makes it difficult to deploy these large models on resource-constrained devices but also increases the risk of latency and inefficiency in processing, which is critical in cases where real-time responses are required, such as autonomous driving and medical diagnostics. In recent years, neural networks have seen significant advancements in model optimization techniques that help deployment and inference on these small devices. This narrative review offers a thorough examination of the progression and latest developments in neural network optimization, focusing on key areas such as quantization, pruning, knowledge distillation, and neural architecture search. It examines how these algorithmic solutions have progressed and how new approaches have improved upon existing techniques, making neural networks more efficient. The review is designed for machine learning researchers, practitioners, and engineers who may be unfamiliar with these methods but wish to explore the available techniques. It highlights ongoing research in optimizing networks for better performance, lower energy consumption, and faster training times, all of which play an important role in the continued scalability of neural networks. Additionally, it identifies gaps in current research and provides a foundation for future studies, aiming to enhance the applicability and effectiveness of existing optimization strategies.
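As an illustration of one of the surveyed optimization techniques, the sketch below performs minimal symmetric post-training int8 quantization of a weight matrix and reports the memory saving and reconstruction error. It is a simplified, assumption-laden example; production toolchains use calibration data, per-channel scales and quantization-aware training.

```python
import numpy as np

def quantize_int8(weights):
    """Minimal symmetric post-training quantization of a weight tensor to int8."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(64, 64)).astype(np.float32)
q, s = quantize_int8(w)
print("memory: float32 =", w.nbytes, "bytes, int8 =", q.nbytes, "bytes")
print("max abs reconstruction error:", np.abs(w - dequantize(q, s)).max())
```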
-
Analysis of the physics-informed neural network approach to solving ordinary differential equations
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1621-1636
The application of physics-informed neural networks based on multilayer perceptrons to the solution of Cauchy initial value problems is considered, for right-hand sides that are continuous monotonically increasing, decreasing, or oscillating functions. Using computational experiments, the influence of the construction of the approximate neural network solution, the neural network structure, the optimization algorithm, and the software implementation means on the learning process and the accuracy of the obtained solution is studied. An analysis of the efficiency of the most frequently used machine learning frameworks in software development with the programming languages Python and C# is carried out. It is shown that the use of the C# language allows reducing the neural network training time by 20-40%. The choice of activation function affects the learning process and the accuracy of the approximate solution; the most effective functions in the considered problems are the sigmoid and the hyperbolic tangent. For a fixed training time of the neural network model, the minimum of the loss function is achieved at a certain number of neurons in the hidden layer of a single-layer neural network; complicating the network structure by increasing the number of neurons does not improve the training results. At the same time, the grid step between the points of the training sample that provides a minimum of the loss function is almost the same for the considered Cauchy problems. When training single-layer neural networks, the Adam method and its modifications are the most effective for solving the optimization problem. Additionally, the application of two- and three-layer neural networks is considered. It is shown that in these cases it is reasonable to use the L-BFGS algorithm, which, compared with the Adam method, in some cases requires much shorter training time to achieve the same solution accuracy. The specifics of neural network training for Cauchy problems in which the solution is an oscillating function with monotonically decreasing amplitude are also investigated. For these problems, it is necessary to construct a neural network solution with a variable weight coefficient rather than a constant one, which improves the solution in the grid cells located near the end point of the solution interval.
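A minimal physics-informed neural network sketch for a simple Cauchy problem, u'(t) = cos(t), u(0) = 0, using a trial solution that satisfies the initial condition exactly and the Adam optimizer. The network size, grid and training schedule are illustrative assumptions and do not reproduce the experiments of the paper.

```python
import torch

# Trial solution u(t) = u0 + t * N(t), so the initial condition holds exactly;
# the loss is the mean squared residual of the ODE on a uniform grid.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)
u0 = 0.0
t = torch.linspace(0.0, 5.0, 101).reshape(-1, 1).requires_grad_(True)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for epoch in range(2000):
    opt.zero_grad()
    u = u0 + t * net(t)                                  # trial solution
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    loss = ((du - torch.cos(t)) ** 2).mean()             # ODE residual loss
    loss.backward()
    opt.step()

exact = torch.sin(t)                                     # analytic solution
print("max error vs sin(t):", (u0 + t * net(t) - exact).abs().max().item())
```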
-
Forecasting methods and models of disease spread
Computer Research and Modeling, 2013, v. 5, no. 5, pp. 863-882
The number of papers addressing the forecasting of infectious disease morbidity is growing rapidly due to the accumulation of available statistical data. This article surveys the major approaches to short-term and long-term morbidity forecasting, pointing out their limitations and practical application possibilities. The paper presents conventional time series analysis methods (regression and autoregressive models); machine learning-based approaches (Bayesian networks and artificial neural networks); case-based reasoning; and filtration-based techniques. The most well-known mathematical models of infectious diseases are mentioned: classical equation-based models (deterministic and stochastic) and modern simulation models (network and agent-based).
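For reference, the simplest of the equation-based models mentioned in the survey, the deterministic SIR model, can be sketched in a few lines; the parameter values below are illustrative assumptions, not estimates from the paper.

```python
def sir_simulate(beta, gamma, s0, i0, days, dt=0.1):
    """Classical deterministic SIR model integrated with the explicit Euler scheme.
    s, i, r are the susceptible, infected and recovered population fractions."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    traj = [(0.0, s, i, r)]
    t = 0.0
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        t += dt
        traj.append((t, s, i, r))
    return traj

# Illustrative parameters: basic reproduction number beta/gamma = 2.5.
trajectory = sir_simulate(beta=0.5, gamma=0.2, s0=0.99, i0=0.01, days=120)
peak = max(trajectory, key=lambda row: row[2])
print(f"infection peak: {peak[2]:.3f} of the population on day {peak[0]:.0f}")
```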
-
Neuro-fuzzy model of fuzzy rules formation for objects state evaluation in conditions of uncertainty
Computer Research and Modeling, 2019, v. 11, no. 3, pp. 477-492
This article addresses the problem of constructing a neuro-fuzzy model for forming fuzzy rules and using them to evaluate the state of objects under uncertainty. Traditional mathematical statistics or simulation modeling methods do not allow building adequate models of objects under these conditions; therefore, at present, the solution of many problems is based on intelligent modeling technologies applying fuzzy logic methods. The traditional approach to constructing fuzzy systems requires involving an expert to formulate the fuzzy rules and specify the membership functions used in them. To eliminate this drawback, automating the formation of fuzzy rules based on machine learning methods and algorithms is relevant. One approach to this problem is to build a fuzzy neural network and train it on data characterizing the object under study. Implementing this approach required choosing the type of fuzzy rules with regard to the specifics of the processed data, and developing a logical inference algorithm over rules of the selected type. The steps of this algorithm determine the number and functionality of layers in the structure of the fuzzy neural network. A training algorithm for the fuzzy neural network was developed. After network training, a system of fuzzy production rules is formed. Based on the developed mathematical tools, a software package was implemented, and with it studies assessing the classifying ability of the formed fuzzy rules were conducted on data sets from the UCI Machine Learning Repository. The results showed that the classifying ability of the formed fuzzy rules is not inferior in accuracy to other classification methods. In addition, the logical inference algorithm over fuzzy rules allows successful classification even when part of the initial data is missing. For practical testing, fuzzy rules were generated to assess the state of water pipelines in the oil industry. Based on initial data for 303 water lines, a base of 342 fuzzy rules was formed. Their practical approbation showed high efficiency in solving the problem.
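A toy sketch of classification by fuzzy production rules with Gaussian membership functions and product inference, including the ability to classify when part of the input is missing, as mentioned in the abstract. The rule structure, membership functions and data are illustrative assumptions, not the paper's model.

```python
import numpy as np

def gaussian_mf(x, center, sigma):
    """Gaussian membership function, a common choice for fuzzy antecedents."""
    return np.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

def classify(sample, rules):
    """Evaluate rules of the form 'IF x1 is A1 AND x2 is A2 THEN class c'
    using product inference. Missing features (None) are skipped, which
    illustrates classification when part of the input is absent."""
    scores = {}
    for antecedents, label in rules:
        firing = 1.0
        for feature_idx, (center, sigma) in antecedents.items():
            value = sample[feature_idx]
            if value is not None:
                firing *= gaussian_mf(value, center, sigma)
        scores[label] = max(scores.get(label, 0.0), firing)
    return max(scores, key=scores.get)

# Two toy rules over two features; feature 1 of the sample is missing.
rules = [
    ({0: (0.2, 0.1), 1: (0.8, 0.2)}, "normal"),
    ({0: (0.9, 0.1), 1: (0.1, 0.2)}, "faulty"),
]
print(classify([0.85, None], rules))   # -> "faulty"
```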