-
Mathematical model and computer analysis of tests for homogeneity of “dose–effect” dependence
Computer Research and Modeling, 2012, v. 4, no. 2, pp. 267-273
This work compares two tests for homogeneity: the chi-square test based on 2 × 2 contingency tables, and a homogeneity test based on the asymptotic distribution of the summed squared error of distribution function estimators in the "dose–effect" dependence model. Test power is evaluated by means of computer simulation. Efficiency functions are constructed by kernel regression using the Nadaraya–Watson estimator.
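The kernel regression step can be sketched in Python. This is an illustrative sketch of the Nadaraya–Watson estimator with a Gaussian kernel, not the authors' code; the logistic dose–effect curve and the bandwidth value below are invented for the demonstration.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.5):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    # Pairwise kernel weights between query points and training points
    diffs = x_query[:, None] - x_train[None, :]
    weights = np.exp(-0.5 * (diffs / bandwidth) ** 2)
    # Weighted average of the training responses at each query point
    return (weights @ y_train) / weights.sum(axis=1)

# Smooth a noisy monotone "dose-effect" curve (synthetic data)
rng = np.random.default_rng(0)
doses = np.linspace(0.0, 5.0, 50)
effects = 1.0 / (1.0 + np.exp(-(doses - 2.5))) + rng.normal(0, 0.05, 50)
smoothed = nadaraya_watson(doses, effects, doses, bandwidth=0.4)
```

The bandwidth controls the bias–variance trade-off: smaller values track the data more closely, larger values give a smoother efficiency curve.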
-
The algorithm of the method for calculating quality classes’ boundaries for quantitative systems’ characteristics and for determination of interactions between characteristics. Part 1. Calculation for two quality classes
Computer Research and Modeling, 2016, v. 8, no. 1, pp. 19-36
A method is suggested for calculating quality class boundaries for quantitative characteristics of systems of any nature. The method allows one to determine: interactions that are not detectable by correlation and regression analysis; the quality class boundaries of a system condition indicator and of the factors influencing that condition; the contribution of each factor to the degree of "inadmissibility" of the indicator values; and whether the program of factor observations suffices to explain the causes of "inadmissible" indicator values.
-
Noise removal from images using the proposed three-term conjugate gradient algorithm
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 841-853
Conjugate gradient algorithms form an important class of unconstrained optimization algorithms with strong local and global convergence properties and modest memory requirements. They occupy a place between the steepest descent method and Newton's method: they require only first derivatives, avoiding the computation and storage of the second derivatives that Newton's method needs, yet they are faster than steepest descent, overcoming its slow convergence without forming the Hessian matrix or any approximation of it. This makes them widely used in optimization applications. This study proposes a novel method for image restoration by fusing the convex combination approach with a hybrid conjugate gradient (CG) method to create a hybrid three-term CG algorithm. The conjugate parameter of the search direction combines the features of the Fletcher–Reeves (FR) parameter and its hybrid variant. The search direction is a combination of the gradient direction, the previous search direction, and the gradient from the previous iteration. We show that the new algorithm possesses the global convergence and descent properties under an inexact line search satisfying the standard Wolfe conditions and some mild assumptions. Numerical results confirm the effectiveness of the suggested algorithm on image restoration problems: it shows high efficiency, accuracy, and speed of convergence compared to the Fletcher–Reeves (FR) and three-term Fletcher–Reeves (TTFR) methods.
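For orientation, here is a minimal sketch of the classical Fletcher–Reeves scheme that the paper builds on, not the proposed three-term hybrid; for brevity it uses an Armijo backtracking line search in place of the Wolfe conditions, and the quadratic test problem is invented.

```python
import numpy as np

def fletcher_reeves(f, grad, x0, max_iter=500, tol=1e-8):
    """Classical Fletcher-Reeves nonlinear conjugate gradient
    with a backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:  # safeguard: restart if d is not a descent direction
            d = -g
        alpha = 1.0
        # Shrink the step until the sufficient-decrease condition holds
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)  # FR parameter ||g_{k+1}||^2 / ||g_k||^2
        d = -g_new + beta * d             # conjugate search direction
        x, g = x_new, g_new
    return x

# Quadratic test problem: minimize 0.5 x^T A x - b^T x, minimizer A^{-1} b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_min = fletcher_reeves(lambda v: 0.5 * v @ A @ v - b @ v,
                        lambda v: A @ v - b,
                        np.zeros(2))
```

The update `d = -g_new + beta * d` is where a three-term variant would add its extra term (the gradient from the previous iteration).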
-
Four-factor computing experiment for the random walk on a two-dimensional square field
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 905-918
Random search has become a widespread and effective tool for solving various complex optimization and adaptation problems. This work studies the average duration of a random search by one object for another on a square field, depending on several factors. The problem was solved by conducting a designed experiment with 4 factors and an orthogonal plan of 54 runs. Within each run, the initial conditions and the cellular automaton transition rules were simulated and the duration of the search was measured. As a result, a regression model was constructed for the average duration of a random search as a function of the four factors considered, which specify the initial positions of the two objects and the conditions of their movement and detection. The most significant of these factors in determining the average search time are identified, and the constructed model is interpreted in terms of the random search problem. An important result is that the model reveals the qualitative and quantitative influence of the initial positions of the objects, the lattice size, and the transition rules on the average search duration. It is shown that initial proximity of the objects on the lattice does not guarantee a quick search if both of them move. In addition, it is estimated quantitatively how many times the average search time can increase or decrease when the speed of the searching object grows by 1 unit, or when the field size grows by 1 unit, for different initial positions of the two objects. The number of steps needed to find the object is found to grow exponentially with the lattice size when the other factors are fixed.
The conditions for the greatest increase in the average search duration are found: with maximally distant objects, one of which is immobile, enlarging the field size by 1 unit (for example, from $4 \times 4$ to $5 \times 5$) can increase the average search duration by a factor of $e^{1.69} \approx 5.42$. The problem considered may be relevant both to state security applications and, for example, to queueing theory.
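A simulation of this kind can be sketched as follows. The lattice size, starting cells, and the reflecting-boundary von Neumann move rule below are illustrative assumptions, not the paper's exact transition rules.

```python
import random

def search_steps(n, seeker, target, target_moves=False, rng=None):
    """Number of steps until a random-walking seeker lands on the target
    cell of an n x n lattice (von Neumann moves, clamped at the borders)."""
    rng = rng or random.Random(0)

    def step(pos):
        x, y = pos
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        # Clamp to the field: a move into the wall leaves the object in place
        return (min(max(x + dx, 0), n - 1), min(max(y + dy, 0), n - 1))

    steps = 0
    while seeker != target:
        seeker = step(seeker)
        if target_moves:
            target = step(target)
        steps += 1
    return steps

# Average search duration over repeated trials on a 5 x 5 field,
# seeker in one corner, immobile target in the opposite corner
trials = [search_steps(5, (0, 0), (4, 4), rng=random.Random(s))
          for s in range(200)]
avg = sum(trials) / len(trials)
```

Averaging over many seeded trials, as in the list comprehension above, is the Monte Carlo analogue of the repeated measurements taken within each run of the experimental plan.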
-
Influence of the mantissa finiteness on the accuracy of gradient-free optimization methods
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 259-280
Gradient-free (zeroth-order) optimization methods are widely used in training neural networks, in reinforcement learning, and in industrial tasks where only the values of a function at a point are available (working with non-analytical functions). In particular, the error backpropagation method in PyTorch works exactly on this principle. It is well known that computer calculations rely on floating-point arithmetic, which gives rise to the problem of mantissa finiteness.
In this paper we, firstly, review the most popular gradient approximation schemes: finite forward/central differences (FFD/FCD), component-wise forward/central differences (FWC/CWC), and forward/central randomization on the $l_2$ sphere (FSSG2/CFFG2); secondly, we describe the current theoretical models of the noise introduced by inexact computation of the function value at a point: adversarial noise and random noise; thirdly, we conduct a series of experiments on frequently encountered problem classes, such as quadratic problems, logistic regression, and SVM, to determine whether the real nature of machine noise matches the existing theory. It turns out that in practice (at least for the problem classes considered in this paper) machine noise lies somewhere between adversarial and random noise, and therefore the current theory on the influence of mantissa finiteness on the search for an optimum in gradient-free optimization requires some adjustment.
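The forward and central difference schemes mentioned above can be sketched as follows, as a minimal illustration on a quadratic where the exact gradient is known; the step size is a typical textbook choice, not a value from the paper.

```python
import numpy as np

def forward_diff_grad(f, x, h=1e-6):
    """Forward finite-difference gradient: (f(x + h e_i) - f(x)) / h, O(h) error."""
    fx = f(x)
    g = np.empty_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def central_diff_grad(f, x, h=1e-6):
    """Central finite-difference gradient: (f(x+h e_i) - f(x-h e_i)) / 2h, O(h^2) error."""
    g = np.empty_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# On f(v) = ||v||^2 the exact gradient at x is 2x
f = lambda v: float(v @ v)
x = np.array([1.0, -2.0, 3.0])
g_forward = forward_diff_grad(f, x)
g_central = central_diff_grad(f, x)
```

The mantissa-finiteness problem is visible directly here: shrinking `h` reduces truncation error but amplifies the floating-point rounding error of `f`, so the total error is minimized at an intermediate step size.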
-
A New Method For Point Estimating Parameters Of Simple Regression
Computer Research and Modeling, 2014, v. 6, no. 1, pp. 57-77
A new method for finding the parameters of a univariate regression model is described: the greatest cosine method. The method divides the regression model parameters into two groups. The first group, responsible for the angle between the experimental data vector and the regression model vector, is determined by maximizing the cosine of the angle between these vectors. The second group contains the scale factor, which is determined by "straightening" the relationship between the experimental data vector and the regression model vector. The interrelation between the greatest cosine method and the method of least squares is examined. The efficiency of the method is illustrated by examples.
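One way to read this two-stage idea is sketched below for a power-law model $y = c\,x^p$: the shape parameter $p$ is chosen by maximizing the cosine between the data vector and the model vector, after which the scale $c$ follows by projection. The model family and the discrete search over $p$ are illustrative assumptions, not the author's construction.

```python
import numpy as np

def greatest_cosine_fit(x, y, shapes):
    """Illustrative two-stage fit: (1) pick the shape parameter maximizing
    the cosine between data and model vectors, (2) recover the scale."""
    best_p, best_cos = None, -np.inf
    for p in shapes:
        u = x ** p  # model vector, known only up to scale
        cos = (y @ u) / (np.linalg.norm(y) * np.linalg.norm(u))
        if cos > best_cos:
            best_cos, best_p = cos, p
    u = x ** best_p
    scale = (y @ u) / (u @ u)  # projection of y onto the chosen direction
    return best_p, scale

# Noiseless power-law data y = 2 x^3: the sketch should recover p = 3, c = 2
x = np.linspace(1.0, 2.0, 20)
y = 2.0 * x ** 3
p_hat, c_hat = greatest_cosine_fit(x, y, shapes=np.arange(1, 6))
```

Note that the cosine criterion is scale-invariant, which is precisely why the scale factor must be recovered in a separate second stage.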
-
Modified Gauss–Newton method for solving a smooth system of nonlinear equations
Computer Research and Modeling, 2021, v. 13, no. 4, pp. 697-723
In this paper, we introduce a new version of the Gauss–Newton method for solving a system of nonlinear equations, based on an upper bound on the residual of the system and a quadratic regularization term. The introduced method in effect forms a whole parameterized family of methods for solving systems of nonlinear equations and regression problems. The developed family consists entirely of iterative methods, with generalizations to non-Euclidean normed spaces, and includes special forms of Levenberg–Marquardt algorithms. The methods use a local model based on a parameterized proximal mapping, which allows an inexact "black-box" oracle with restrictions on computational precision and complexity. We perform an efficiency analysis, covering global and local convergence, for the developed family with an arbitrary oracle in terms of iteration complexity, the precision and complexity of both the local model and the oracle, and the problem dimensionality. We present global sublinear convergence rates for methods of the proposed family applied to systems of nonlinear equations consisting of Lipschitz-smooth functions, and we prove local superlinear convergence under additional natural non-degeneracy assumptions. We also prove both local and global linear convergence under the Polyak–Lojasiewicz condition. Besides the theoretical justification of the methods, we consider practical implementation issues; in particular, for the conducted experiments we present effective computational schemes for the exact oracle with regard to the problem dimensionality.
The proposed family unites several existing Gauss–Newton modifications that are frequent in practice, allowing us to construct a flexible and convenient method implementable using standard convex optimization and computational linear algebra techniques.
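The core iteration behind such methods can be sketched as a regularized Gauss–Newton step. This minimal version with a fixed regularizer is an illustration of the classical scheme, not the paper's parameterized family, and the 2 × 2 test system is invented.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, reg=1e-8, max_iter=50, tol=1e-10):
    """Gauss-Newton iteration with a small quadratic regularization
    (Levenberg-Marquardt style): x <- x - (J^T J + reg I)^{-1} J^T r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        # Solve the regularized normal equations for the step
        step = np.linalg.solve(J.T @ J + reg * np.eye(x.size), J.T @ r)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Solve the 2 x 2 nonlinear system: x^2 + y^2 = 4, x*y = 1
res = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
root = gauss_newton(res, jac, np.array([2.0, 0.5]))
```

In the paper's setting the fixed `reg` would be replaced by the parameterized proximal term, and the exact residual and Jacobian by an inexact oracle.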
-
Searching for connections between biological and physico-chemical characteristics of Rybinsk reservoir ecosystem. Part 1. Criteria of connection nonrandomness
Computer Research and Modeling, 2013, v. 5, no. 1, pp. 83-105
Based on phytoplankton pigment contents, fluorescence samples, and some physico-chemical characteristics of the Rybinsk reservoir waters, a search for connections between biological and physico-chemical characteristics is carried out. Standard methods of statistical analysis (correlation, regression) are studied, along with methods for describing connections between qualitative classes of characteristics based on the deviation of the joint distribution of the studied characteristics from an independent distribution. A method is proposed for finding quality class boundaries by the criterion of the maximum connection coefficient.
-
The optimization approach to simulation modeling of microstructures
Computer Research and Modeling, 2013, v. 5, no. 4, pp. 597-606
The paper presents an optimization approach to microstructure simulation. The porosity function is optimized by a numerical method, and the grain-size model is optimized by the complex method based on model quality criteria. The methods have been validated on examples. A new regression model of model quality is presented. The intended application of the proposed method is 3D reconstruction of core sample microstructure. The results obtained warrant further investigation.
-
Forecasting methods and models of disease spread
Computer Research and Modeling, 2013, v. 5, no. 5, pp. 863-882
The number of papers addressing forecasting of infectious disease morbidity is growing rapidly owing to the accumulation of available statistical data. This article surveys the major approaches to short-term and long-term morbidity forecasting, pointing out their limitations and practical applicability. The paper presents the conventional time series analysis methods (regression and autoregressive models); machine learning-based approaches (Bayesian networks and artificial neural networks); case-based reasoning; and filtration-based techniques. The best-known mathematical models of infectious diseases are also discussed: classical equation-based models (deterministic and stochastic) and modern simulation models (network and agent-based).
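As a minimal illustration of the deterministic equation-based models mentioned, here is a forward-Euler sketch of the classic SIR model; the parameter values and population size are invented for the demonstration.

```python
def sir_simulate(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the deterministic SIR model:
    S' = -beta*S*I/N,  I' = beta*S*I/N - gamma*I,  R' = gamma*I."""
    s, i, r = s0, i0, r0
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        n = s + i + r
        ds = -beta * s * i / n  # new infections leave S
        dr = gamma * i          # recoveries leave I
        s += ds * dt
        i += (-ds - dr) * dt    # I gains infections, loses recoveries
        r += dr * dt
        history.append((s, i, r))
    return history

# A small outbreak with basic reproduction number R0 = beta / gamma = 2.5
traj = sir_simulate(beta=0.5, gamma=0.2, s0=990.0, i0=10.0, r0=0.0, days=120)
peak_infected = max(i for _, i, _ in traj)
```

Because the three increments sum to zero at every step, the total population is conserved by construction, which is a useful sanity check for any implementation of such compartment models.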
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index