All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Introduction to the theory of complex networks
Computer Research and Modeling, 2010, v. 2, no. 2, pp. 121-141. In recent years a new direction in the study of complex systems has emerged that treats them as networks. Nodes in such networks represent the elements of these complex systems, and links between nodes represent the interactions between those elements. This research deals with real systems, such as biological ones (metabolic networks of cells, functional networks of the brain, ecological systems), technical ones (the Internet, the WWW, cellular communication networks, power grids), and social ones (scientific collaboration networks, the movie-actor network, acquaintance networks). These networks turn out to have a more complex architecture than classical random networks. This review presents the basic concepts of the theory of complex networks and briefly describes the main directions in the study of real network structures.
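As a minimal illustration of the contrast the review describes between classical random networks and real networks, the sketch below (ours, not from the article) compares an Erdős–Rényi random graph with a preferential-attachment (Barabási–Albert style) graph in pure Python:

```python
import random

random.seed(0)

def erdos_renyi(n, p):
    """Classical random graph: each pair of nodes is linked independently with probability p."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def barabasi_albert(n, m):
    """Preferential attachment: each new node links to m existing nodes chosen
    with probability proportional to degree, producing a power-law degree tail."""
    adj = {i: set() for i in range(n)}
    targets = list(range(m))      # the first new node connects to the seed nodes
    repeated = []                 # node list weighted by current degree
    for v in range(m, n):
        for t in targets:
            adj[v].add(t)
            adj[t].add(v)
        repeated.extend(targets)
        repeated.extend([v] * m)
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(repeated))
        targets = list(chosen)
    return adj

er = erdos_renyi(500, 0.02)
ba = barabasi_albert(500, 5)
for name, g in [("ER", er), ("BA", ba)]:
    degs = [len(nb) for nb in g.values()]
    print(name, "max degree:", max(degs), "mean degree:", sum(degs) / len(degs))
```

The preferential-attachment graph develops hubs with degrees far above the maximum degree of the classical random graph at a comparable mean degree, which is exactly the architectural difference the review emphasizes.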
-
Effective rank of the problem of estimating a function from error-contaminated measurements of a finite number of its linear functionals
Computer Research and Modeling, 2014, v. 6, no. 2, pp. 189-202. The problem of reconstructing an element f of the Euclidean functional space L2(X) from measurements of a finite set of its linear functionals distorted by (random) error is solved. No a priori data are assumed. A family of linear subspaces of maximum (effective) dimension is obtained for which the projections of the element f onto them can be estimated with a given accuracy. The effective rank ρ(δ) of the estimation problem is defined as the function equal to the maximum dimension of an orthogonal component Pf of the element f that can be estimated with an error not exceeding δ. An example of reconstructing a radiation spectrum from a finite set of experimental data is given.
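One common way to make a notion of effective rank concrete is through the singular value decomposition of the measurement operator: the component of f along the i-th right singular vector is recovered with error roughly noise/σ_i, so a rank ρ(δ) can be read off by counting components whose error stays below δ. The sketch below is our interpretation only; all names and values (A, noise, delta) are hypothetical, not taken from the article:

```python
import numpy as np

# Hypothetical setup: m noisy measurements of linear functionals of f,
# y = A f + noise, with A the m x n matrix of functional values.
n, m, noise = 60, 25, 1e-2
x = np.linspace(0.0, 1.0, n)
centers = np.linspace(0.0, 1.0, m)
A = np.exp(-((centers[:, None] - x[None, :]) ** 2) / 0.01)  # smoothing kernels

# SVD of the measurement operator: the component of f along the right singular
# vector v_i is recovered as (u_i . y) / s_i, so its error scales as noise / s_i.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

def effective_rank(s, noise, delta):
    """Largest number of singular components whose estimation error
    noise / s_i stays within the allowed accuracy delta."""
    return int(np.sum(noise / s <= delta))

for delta in (1e-2, 1e-1, 1.0):
    print(f"delta={delta:g}: effective rank = {effective_rank(s, noise, delta)}")
```

As expected, the recoverable dimension grows monotonically as the accuracy requirement δ is relaxed.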
-
The similarity dimension of the random iterated function system
Computer Research and Modeling, 2012, v. 4, no. 4, pp. 681-691. In this paper we consider the properties of random iterated function systems (RIFS) obtained using a generalization of the Chaos game algorithm. R, a free software environment for statistical computing and graphics, was used to simulate the RIFS. The similarity dimension dS(μ|k) for polygonal protofractals Z = {zj}, j = 1, 2, . . . , k depends nonmonotonically on the RIFS parameters, with the extreme value max dS(μ|k) = −ln k/ln(1/(1+μ)).
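The construction behind the abstract's formula can be sketched with a chaos-game simulation. Our reading (an assumption, not the article's exact parameterization) is that each step contracts toward a randomly chosen vertex with ratio 1/(1+μ); this reproduces −ln k/ln(1/(1+μ)) = ln k/ln(1+μ) and gives the classical Sierpinski value ln 3/ln 2 at k = 3, μ = 1:

```python
import math
import random

random.seed(42)

def chaos_game(k, mu, n_points=20000):
    """Chaos game for a k-vertex polygonal protofractal: at each step the
    current point jumps toward a random vertex with contraction ratio 1/(1+mu)."""
    verts = [(math.cos(2 * math.pi * j / k), math.sin(2 * math.pi * j / k))
             for j in range(k)]
    r = 1.0 / (1.0 + mu)
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n_points):
        vx, vy = random.choice(verts)
        x, y = vx + (x - vx) * r, vy + (y - vy) * r
        pts.append((x, y))
    return pts

def similarity_dimension(k, mu):
    """d_S = -ln k / ln(1/(1+mu)) = ln k / ln(1+mu), as in the abstract."""
    return math.log(k) / math.log(1.0 + mu)

# Sierpinski triangle: k = 3, mu = 1 -> contraction 1/2, d_S = ln 3 / ln 2
print(round(similarity_dimension(3, 1.0), 4))   # ~1.585
```

Plotting the returned points for k = 3, μ = 1 yields the familiar Sierpinski triangle; varying μ moves the dimension between 0 and the extreme value above.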
-
Conditions of Rice statistical model applicability and estimation of the Rician signal’s parameters by maximum likelihood technique
Computer Research and Modeling, 2014, v. 6, no. 1, pp. 13-25. The paper develops the theory of a new, so-called two-parametric approach to random signal analysis and processing. Mathematical simulation and a comparison of task solutions have been carried out for the Gauss and Rice statistical models. The applicability of the Rice statistical model is substantiated for data and image processing tasks in which the signal's envelope is analyzed. A technique for noise suppression and initial image reconstruction is developed and theoretically substantiated, based on jointly estimating both statistical parameters (the initial signal's mean value and the noise dispersion) by the maximum likelihood method within the Rice distribution. The peculiarities of this distribution's likelihood function, and the resulting possibilities for signal and noise estimation, are analyzed.
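A sketch of jointly estimating the two Rician parameters by maximum likelihood (not the authors' exact procedure; the sample generation and optimizer choice are ours) using the Rice log-density ln p(x) = ln(x/σ^2) − (x^2 + ν^2)/(2σ^2) + ln I_0(xν/σ^2), with the Bessel term computed stably via the exponentially scaled function i0e:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0e

rng = np.random.default_rng(7)

# Rician samples: envelope of a constant signal nu plus complex Gaussian noise.
nu_true, sigma_true, n = 3.0, 1.0, 20000
x = np.abs(nu_true + rng.normal(0, sigma_true, n)
           + 1j * rng.normal(0, sigma_true, n))

def neg_loglik(params, x):
    """Negative Rice log-likelihood in (nu, sigma); ln I0(z) = z + ln(i0e(z))."""
    nu, sigma = params
    if nu <= 0 or sigma <= 0:
        return np.inf
    z = x * nu / sigma**2
    ll = (np.log(x / sigma**2) - (x**2 + nu**2) / (2 * sigma**2)
          + z + np.log(i0e(z)))
    return -np.sum(ll)

# Joint ML estimate of both parameters at once, as in the two-parametric approach.
res = minimize(neg_loglik, x0=[1.0, 2.0], args=(x,), method="Nelder-Mead")
nu_hat, sigma_hat = res.x
print(f"nu ~ {nu_hat:.3f}, sigma ~ {sigma_hat:.3f}")
```

With a large sample the joint estimates land close to the true (ν, σ); at low signal-to-noise ratios the likelihood surface flattens, which is the regime where the peculiarities discussed in the abstract matter.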
-
Synthesis of the structure of organised systems as a central problem of evolutionary cybernetics
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1103-1124The article provides approaches to evolutionary modelling of synthesis of organised systems and analyses methodological problems of evolutionary computations of this kind. Based on the analysis of works on evolutionary cybernetics, evolutionary theory, systems theory and synergetics, we conclude that there are open problems in formalising the synthesis of organised systems and modelling their evolution. The article emphasises that the theoretical basis for the practice of evolutionary modelling is the principles of the modern synthetic theory of evolution. Our software project uses a virtual computing environment for machine synthesis of problem solving algorithms. In the process of modelling, we obtained the results on the basis of which we conclude that there are a number of conditions that fundamentally limit the applicability of genetic programming methods in the tasks of synthesis of functional structures. The main limitations are the need for the fitness function to track the step-by-step approach to the solution of the problem and the inapplicability of this approach to the problems of synthesis of hierarchically organised systems. We note that the results obtained in the practice of evolutionary modelling in general for the whole time of its existence, confirm the conclusion the possibilities of genetic programming are fundamentally limited in solving problems of synthesizing the structure of organized systems. As sources of fundamental difficulties for machine synthesis of system structures the article points out the absence of directions for gradient descent in structural synthesis and the absence of regularity of random appearance of new organised structures. The considered problems are relevant for the theory of biological evolution. The article substantiates the statement about the biological specificity of practically possible ways of synthesis of the structure of organised systems. 
As a theoretical interpretation of the discussed problem, we propose to consider the system-evolutionary concept of P.K.Anokhin. The process of synthesis of functional structures in this context is an adaptive response of organisms to external conditions based on their ability to integrative synthesis of memory, needs and information about current conditions. The results of actual studies are in favour of this interpretation. We note that the physical basis of biological integrativity may be related to the phenomena of non-locality and non-separability characteristic of quantum systems. The problems considered in this paper are closely related to the problem of creating strong artificial intelligence.
-
Modeling time series trajectories using the Liouville equation
Computer Research and Modeling, 2024, v. 16, no. 3, pp. 585-598. This paper presents an algorithm for modeling a set of trajectories of a non-stationary time series, based on a numerical scheme for approximating the sample density of the distribution function in a problem with fixed ends, in which the initial distribution is transformed over a given number of steps into a certain final distribution so that at each step the semigroup property of the solution of the Liouville equation is satisfied. The model makes it possible to numerically construct the evolving densities of distribution functions during random switching of the states of the system generating the original time series.
The main difficulty is that a numerical implementation of the left-sided difference derivative in time makes the solution unstable, yet it is this approach that corresponds to the modeling of evolution. If instead one chooses implicit stable schemes that "look into the future", the semigroup property is no longer satisfied at each step. If, on the other hand, some real process is being modeled in which goal-setting presumably takes place, then it is desirable to use schemes that generate a model of the transition process. Such a model is later used to build a predictor of the disorder, which makes it possible to determine exactly which state the process under study is moving into before the process actually enters it. The model described in the article can be used as a tool for modeling real non-stationary time series.
The steps of the modeling scheme are as follows. Fragments corresponding to particular states, for example trends with specified slope angles and variances, are selected from a given time series. Reference distributions of the states are compiled from these fragments. The empirical distributions of the duration of the system's stay in the specified states, and of the transition times from state to state, are then determined. In accordance with these empirical distributions, a probabilistic model of the disorder is constructed and the corresponding trajectories of the time series are modeled.
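The steps above can be sketched as a toy two-state simulation; the states, slopes, noise levels and duration lists below are illustrative stand-ins for the reference and empirical distributions described in the abstract:

```python
import random

random.seed(3)

# Hypothetical two-state example: each state is a trend with its own slope and
# noise level, and the dwell time in a state is drawn from an (assumed)
# empirical duration distribution.
STATES = {
    "up":   {"slope": +0.5, "sigma": 0.3, "durations": [20, 30, 40]},
    "down": {"slope": -0.7, "sigma": 0.5, "durations": [10, 15, 25]},
}

def simulate(n_steps=200):
    """Generate one trajectory with random state switching (the 'disorder')."""
    series, switches = [], []
    state, x = "up", 0.0
    t_left = random.choice(STATES[state]["durations"])
    for t in range(n_steps):
        p = STATES[state]
        x += p["slope"] + random.gauss(0.0, p["sigma"])
        series.append(x)
        t_left -= 1
        if t_left == 0:                    # disorder: switch to the other state
            state = "down" if state == "up" else "up"
            t_left = random.choice(STATES[state]["durations"])
            switches.append(t)
    return series, switches

series, switches = simulate()
print(len(series), "points,", len(switches), "state switches")
```

Running the simulator many times yields an ensemble of trajectories from which the evolving sample densities between switches can be estimated, which is the object the Liouville-equation scheme of the article models.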
-
Isotropic Multidimensional Catalytic Branching Random Walk with Regularly Varying Tails
Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1033-1039. The study completes a series of the author's works devoted to the spread of a particle population in a supercritical catalytic branching random walk (CBRW) on a multidimensional lattice. The CBRW model describes the evolution of a system of particles combining their random movement with branching (reproduction and death), which only occurs at fixed points of the lattice. The set of such catalytic points is assumed to be finite and arbitrary. In the supercritical regime the size of the population, initiated by a parent particle, increases exponentially with positive probability. The rate of the spread depends essentially on the distribution tails of the random walk jump. If the jump distribution has “light tails”, the “population front”, formed by the particles most distant from the origin, moves linearly in time and the limiting shape of the front is a convex surface. When the random walk jump has independent coordinates with a semiexponential distribution, the population spreads at a power rate in time and the limiting shape of the front is a star-shaped nonconvex surface. So far, for regularly varying (“heavy”) tails, we have considered the problem of scaled front propagation assuming independence of the components of the random walk jump. Now, without this hypothesis, we examine an “isotropic” case, in which the rate of decay of the jump distribution in different directions is given by the same regularly varying function. We specify the probability that, as time goes to infinity, the limiting random set formed by the appropriately scaled positions of the population particles belongs to a set $B$ in $\mathbb{R}^d$ containing the origin together with a neighborhood. In contrast to the previous results, the random cloud of particles with normalized positions will not, in the time limit, concentrate on the coordinate axes with probability one.
Keywords: catalytic branching random walk, spread of population.
-
Training and assessing the generalization ability of interpolation methods
Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1023-1031. We investigate machine learning methods with a particular kind of decision rule: the inverse-distance interpolation method, interpolation by radial basis functions, and the method of multidimensional interpolation and approximation based on the theory of random functions, the last of which is kriging. This paper presents a method for rapidly retraining the “model” when new data are added to the existing data. The term “model” means the interpolating or approximating function constructed from the training data. This approach reduces the computational complexity of constructing an updated “model” from $O(n^3)$ to $O(n^2)$. We also investigate the possibility of rapidly assessing the generalization ability of the “model” on the training set using leave-one-out cross-validation, eliminating the major drawback of this approach: the need to build a new “model” for each element removed from the training set.
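The reduction from $O(n^3)$ to $O(n^2)$ can be illustrated with a blockwise (bordering) update of the inverse kernel matrix when one training point is added. This is a standard identity rather than the article's exact procedure, sketched here for a Gaussian RBF kernel of our choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
nugget = 1e-4                              # diagonal regularization for conditioning

def rbf_kernel(X, Y, eps=1.0):
    """Gaussian RBF kernel matrix (an illustrative choice of basis)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)

def add_point(K_inv, b, c):
    """O(n^2) update of the inverse kernel matrix when one training point is
    added, via blockwise (bordering) inversion of [[K, b], [b^T, c]],
    instead of an O(n^3) re-inversion from scratch."""
    Kb = K_inv @ b
    s = c - b @ Kb                         # scalar Schur complement
    n = K_inv.shape[0]
    out = np.empty((n + 1, n + 1))
    out[:n, :n] = K_inv + np.outer(Kb, Kb) / s
    out[:n, n] = -Kb / s
    out[n, :n] = -Kb / s
    out[n, n] = 1.0 / s
    return out

X = rng.normal(size=(30, 2))
K_inv = np.linalg.inv(rbf_kernel(X, X) + nugget * np.eye(30))

x_new = rng.normal(size=(1, 2))
b = rbf_kernel(X, x_new).ravel()
c = rbf_kernel(x_new, x_new)[0, 0] + nugget

K_inv_updated = add_point(K_inv, b, c)     # O(n^2) instead of O(n^3)

# check against the O(n^3) rebuild
X_full = np.vstack([X, x_new])
K_full = rbf_kernel(X_full, X_full) + nugget * np.eye(31)
print(np.allclose(K_inv_updated @ K_full, np.eye(31), atol=1e-6))  # True
```

Once the inverse is maintained incrementally, the model weights for the enlarged training set follow from a single matrix-vector product, so each data addition costs $O(n^2)$ overall.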
-
Reduction of the decision rule of the multivariate interpolation and approximation method in the problem of data classification
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 475-484. This article explores a machine learning method based on the theory of random functions. One of the main problems of this method is that the decision rule of the model becomes more complicated as the number of training examples increases. The decision rule of the model is the most probable realization of a random function, and it is represented as a polynomial with a number of terms equal to the number of training examples. In this article we show a quick way to reduce the number of training examples and, accordingly, the complexity of the decision rule. The number of training examples is reduced by finding and removing weak elements, which have little effect on the final form of the decision function, as well as noisy sample elements. For each sample element $(x_i, y_i)$ we introduce the concept of its value, expressed as the deviation of the estimated value of the model's decision function at the point $x_i$, built without the $i$-th element, from the true value $y_i$. We also show the possibility of using weak elements indirectly in the process of training the model without increasing the number of terms in the decision function. In the experimental part of the article, we show how changing the amount of data affects the generalization ability of the method in the classification task.
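The "value" of a sample element can be sketched with any interpolating model. Below we use a simple Nadaraya–Watson kernel smoother as a stand-in for the article's decision rule (the dataset, bandwidth and pruning threshold are our hypothetical choices) and compute each element's value as the deviation of the leave-one-out prediction at $x_i$ from $y_i$:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy 1-D dataset with one deliberately noisy element.
x = np.sort(rng.uniform(-3, 3, 40))
y = np.sin(x) + rng.normal(0, 0.05, 40)
y[10] += 1.5                                   # inject a noisy sample element

def predict(x_train, y_train, x0, h=0.5):
    """Nadaraya-Watson kernel smoother as a stand-in decision function."""
    w = np.exp(-((x0 - x_train) ** 2) / (2 * h * h))
    return np.sum(w * y_train) / np.sum(w)

def element_values(x, y):
    """Value of the i-th element: deviation of the leave-one-out prediction at
    x_i from the true y_i. Small value -> weak (redundant) element; very large
    value -> likely noise."""
    vals = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        vals.append(abs(predict(x[mask], y[mask], x[i]) - y[i]))
    return np.array(vals)

vals = element_values(x, y)
print("most anomalous element index:", int(np.argmax(vals)))
keep = vals > np.quantile(vals, 0.25)          # drop the weakest quarter
print("kept", int(keep.sum()), "of", len(x), "elements")
```

The injected outlier receives by far the largest value, while near-duplicate points receive values close to zero, so both noise removal and weak-element pruning fall out of the same leave-one-out quantity.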
-
Influence of the mantissa finiteness on the accuracy of gradient-free optimization methods
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 259-280. Gradient-free optimization methods, or zeroth-order methods, are widely used in training neural networks and in reinforcement learning, as well as in industrial tasks where only the values of a function at a point are available (working with non-analytical functions). In particular, the method of error back propagation in PyTorch works on exactly this principle. It is a well-known fact that computer calculations use floating-point arithmetic, which gives rise to the problem of the finiteness of the mantissa.
In this paper we first review the most popular gradient approximation methods: finite forward/central differences (FFD/FCD), forward/central component-wise differences (FWC/CWC), and forward/central randomization on the $l_2$ sphere (FSSG2/CFFG2). Second, we describe the current theoretical models of the noise introduced by inaccuracy in computing the function at a point: adversarial noise and random noise. Third, we conduct a series of experiments on frequently encountered classes of problems, such as the quadratic problem, logistic regression and SVM, to try to determine whether the real nature of machine noise corresponds to the existing theory. It turns out that in reality (at least for the classes of problems considered in this paper) machine noise is something between adversarial noise and random noise, and therefore the current theory of the influence of the finite mantissa on the search for the optimum in gradient-free optimization problems requires some adjustment.
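The gradient approximations named above can be sketched as follows; the forward/central difference formulas are standard, while the randomized $l_2$-sphere estimator below is our generic version of the FSSG2/CFFG2 family, not the paper's exact construction:

```python
import numpy as np

def grad_forward(f, x, tau=1e-6):
    """Forward finite differences: one extra function call per coordinate,
    O(tau) accurate."""
    g = np.empty_like(x)
    fx = f(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = tau
        g[i] = (f(x + e) - fx) / tau
    return g

def grad_central(f, x, tau=1e-6):
    """Central finite differences: two calls per coordinate, O(tau^2) accurate."""
    g = np.empty_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = tau
        g[i] = (f(x + e) - f(x - e)) / (2 * tau)
    return g

def grad_random_sphere(f, x, tau=1e-4, n_dirs=64, seed=0):
    """Randomized estimator on the l2 sphere: average of
    d * (f(x + tau u) - f(x - tau u)) / (2 tau) * u over random unit directions u."""
    rng = np.random.default_rng(seed)
    d = len(x)
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)
        g += d * (f(x + tau * u) - f(x - tau * u)) / (2 * tau) * u
    return g / n_dirs

f = lambda x: 0.5 * x @ x          # quadratic test problem, exact gradient = x
x = np.array([1.0, -2.0, 3.0])
print(np.round(grad_central(f, x), 4))
```

Shrinking tau below the level dictated by the mantissa makes the difference quotients noise-dominated rather than more accurate, which is precisely the finite-mantissa effect the paper studies.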
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
International Interdisciplinary Conference "Mathematics. Computing. Education"