Search results for 'entropy':
Articles found: 18
  1. Polosin V.G.
    Quantile shape measures for heavy-tailed distributions
    Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1041-1077

    Journal papers currently contain numerous examples of heavy-tailed distributions applied to the study of various complex systems. Models of extreme data are usually limited to the small set of distribution shapes that have historically been used in a given field of applied research. The set of usable probability distribution shapes can be enlarged by comparing measures of distribution shape and choosing the most suitable realizations. The example of a beta distribution of the second kind shows that the undefined moments of heavy-tailed realizations of the beta family of distributions limit the applicability of the existing classical methods of moments for studying the shapes of distributions characterized by heavy tails. For this reason, it remains relevant to develop new methods of comparing distributions based on quantile shape measures that are free from restrictions on the shape parameters. The purpose of this work is a computer study of the possibility of constructing a space of quantile shape measures for comparing the properties of heavy-tailed distributions. Realizations of the distributions were mapped into the space of shape measures on the basis of computer simulation. Mapping distributions into the space of parametric shape measures alone showed that the overlap of the regions of heavy-tailed distributions makes it impossible to compare the shapes of distributions of different types in the space of quantile measures of skewness and kurtosis. It is well known that information shape measures, such as entropy and the entropy uncertainty interval, carry additional information about the shape of heavy-tailed distributions. In this paper, a quantile entropy coefficient is proposed as an additional independent shape measure, based on the ratio of the entropy and quantile uncertainty intervals. Estimates of quantile entropy coefficients are obtained for a number of well-known heavy-tailed distributions. The possibility of comparing distribution shapes with realizations of the beta distribution of the second kind is illustrated by the examples of the lognormal distribution and the Pareto distribution. Mapping the positions of stable distributions in the three-dimensional space of quantile shape measures made it possible to estimate the shape parameters of the beta distribution of the second kind whose shape is closest to the Lévy shape. It follows from the paper that displaying distributions in the three-dimensional space of the quantile shape measures of skewness, kurtosis and the entropy coefficient significantly expands the possibility of comparing the shapes of heavy-tailed distributions.
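
    The abstract does not spell out the exact construction, but the idea admits a compact illustration. Below is a minimal Python sketch, assuming the entropy uncertainty interval is taken as exp(H) for the differential entropy H and the quantile uncertainty interval as an interquantile range with a hypothetical level alpha; quantile_entropy_coefficient is an illustrative name, not the author's code.

```python
# Sketch only: a quantile entropy coefficient as the ratio of an entropy
# uncertainty interval exp(H) to an interquantile interval (assumed forms).
import numpy as np
from scipy import stats

def quantile_entropy_coefficient(dist, alpha=0.05):
    H = dist.entropy()                              # differential entropy, nats
    quantile_interval = dist.ppf(1 - alpha) - dist.ppf(alpha)
    return float(np.exp(H) / quantile_interval)

# Two of the heavy-tailed shapes compared in the paper (parameters arbitrary).
for name, dist in [("lognormal", stats.lognorm(s=1.0)),
                   ("Pareto", stats.pareto(b=2.5))]:
    print(name, round(quantile_entropy_coefficient(dist), 3))
```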

  2. Kozhevnikov V.S., Matyushkin I.V., Chernyaev N.V.
    Analysis of the basic equation of the physical and statistical approach within reliability theory of technical systems
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 721-735

    Verification of the physical-statistical approach within reliability theory was carried out for the simplest cases and showed its validity. An analytical solution of the one-dimensional basic equation of the physical-statistical approach is presented under the assumption of a stationary degradation rate. From a mathematical point of view this equation is the well-known continuity equation, in which the role of density is played by the distribution function of goods over the phase space of their characteristics, and the role of fluid velocity is played by the intensity (rate) of degradation processes. The latter connects the general formalism with the specifics of particular degradation mechanisms. The cases of constant, linear and quadratic (in the coordinate) degradation rates are analyzed using the method of characteristics. In the first two cases, the results correspond to physical intuition. At a constant degradation rate, the shape of the initial distribution is preserved and the distribution itself moves uniformly away from zero. At a linear degradation rate, the distribution either narrows down to a narrow peak (in the singular limit) or expands, with the maximum shifting to the periphery at an exponentially increasing rate. The form of the distribution is again preserved up to its parameters. For an initial normal distribution, the coordinates of the largest value of the distribution maximum during its return motion are obtained analytically.

    In the quadratic case, the formal solution demonstrates counterintuitive behavior. The solution is uniquely defined only on part of an infinite half-plane, vanishes together with all its derivatives on the boundary, and is ambiguous when the boundary is crossed. If it is continued into the other region in accordance with the analytical solution, it takes a two-humped form and conserves the amount of substance, but, which is devoid of physical meaning, is periodic in time. If it is continued by zero, the conservation property is violated. The anomaly of the quadratic case is explained, though not rigorously, by analogy with the motion of a material point with an acceleration proportional to the square of its velocity. Here we are dealing with a mathematical curiosity. Numerical calculations are given for all cases. Additionally, the entropy of the probability distribution and the reliability function are calculated, and their correlation is traced.
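
    The constant- and linear-rate solutions described above follow directly from the method of characteristics. A minimal Python sketch of these two analytical solutions (the initial profile and parameter values are illustrative, not taken from the paper):

```python
# Analytical solutions of the 1-D continuity equation
#   df/dt + d(v(x) f)/dx = 0
# obtained by the method of characteristics (sketch, not the authors' code).
import numpy as np

def f_const_rate(x, t, f0, c):
    # v(x) = c: the initial profile moves uniformly without changing shape.
    return f0(x - c * t)

def f_linear_rate(x, t, f0, k):
    # v(x) = k*x: characteristics are x(t) = x0*exp(k*t); the factor
    # exp(-k*t) keeps the total amount of "substance" conserved.
    return f0(x * np.exp(-k * t)) * np.exp(-k * t)

f0 = lambda x: np.exp(-((x - 2.0) ** 2) / 0.5)  # illustrative initial profile
x = np.linspace(0.0, 10.0, 6)
print(f_const_rate(x, 1.0, f0, c=1.0))
print(f_linear_rate(x, 1.0, f0, k=0.3))
```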

  3. Ivanova A.S., Omelchenko S.S., Kotliarova E.V., Matyukhin V.V.
    Calibration of model parameters for calculating correspondence matrix for Moscow
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 961-978

    In this paper, we consider the problem of restoring the correspondence matrix based on observations of real correspondences in Moscow. Following the conventional approach [Gasnikov et al., 2013], the transport network is considered as a directed graph whose edges correspond to road sections and whose vertices correspond to the areas that traffic participants leave or enter. The number of city residents is considered constant. The problem of restoring the correspondence matrix is to calculate all the correspondences from area $i$ to area $j$.

    To restore the matrix, we propose to use one of the most popular methods of calculating the correspondence matrix in urban studies — the entropy model. Following [Wilson, 1978], we describe the evolutionary justification of the entropy model and the main idea of the transition to solving a problem of entropy-linear programming (ELP) when calculating the correspondence matrix. To solve the ELP problem, it is proposed to pass to the dual problem. We describe several numerical optimization methods for solving the dual problem: the Sinkhorn method and the Accelerated Sinkhorn method. We provide numerical experiments for the following variants of the cost function: a linear cost function and a superposition of power and logarithmic cost functions. In these functions, the cost is a parameter-dependent combination of the average travel time and the distance between areas. The correspondence matrix is calculated for multiple sets of parameters, after which we evaluate the quality of the restored matrix relative to the known correspondence matrix.
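
    As a rough illustration of the Sinkhorn method in this setting, here is a minimal sketch, assuming a cost matrix C, departure totals l, arrival totals w and a regularization parameter gamma (all names and values are illustrative, not the authors' implementation):

```python
# Sinkhorn balancing for the entropy model (sketch): find
# d_ij ~ exp(-C_ij / gamma) rescaled to match row sums l and column sums w.
import numpy as np

def sinkhorn(C, l, w, gamma=1.0, n_iter=500):
    K = np.exp(-C / gamma)
    u, v = np.ones_like(l), np.ones_like(w)
    for _ in range(n_iter):
        u = l / (K @ v)      # enforce departure (row) constraints
        v = w / (K.T @ u)    # enforce arrival (column) constraints
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
C = rng.random((4, 4))
l = w = np.full(4, 1.0)
d = sinkhorn(C, l, w)
print(d.sum(axis=1), d.sum(axis=0))  # both close to l and w
```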

    We assume that the noise in the restored correspondence matrix is Gaussian; as a result, we use the standard deviation as a quality metric. The article provides an overview of gradient-free optimization methods for solving non-convex problems. Since the number of parameters of the cost function is small, we use the grid search method to find its optimal values. Thus, the correspondence matrix is calculated for each set of parameters, and then the quality of the restored matrix is evaluated relative to the known correspondence matrix. Further, according to the minimum residual value for each cost function, we determine for which cost function and at what parameter values the restored matrix best describes the real correspondences.
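
    Since the grid search itself is simple, a hedged sketch suffices; restore below is a crude stand-in for the entropy-model reconstruction with a one-parameter cost function, not the pipeline from the paper:

```python
# Grid search over a cost-function parameter, scored by the standard
# deviation of the restored matrix from a reference matrix (sketch only).
import numpy as np

rng = np.random.default_rng(1)
T, D = rng.random((4, 4)), rng.random((4, 4))     # stand-in time/distance data
d_ref = rng.random((4, 4)); d_ref /= d_ref.sum()  # stand-in reference matrix

def restore(a):
    # Placeholder reconstruction with cost a*T + (1 - a)*D.
    K = np.exp(-(a * T + (1 - a) * D))
    return K / K.sum()

best = min(np.linspace(0.0, 1.0, 21), key=lambda a: np.std(restore(a) - d_ref))
print("best cost weight:", best)
```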

  4. Yevin I.A., Komarov V.V., Popova M.S., Marchenko D.K., Samsonova A.J.
    Cities road networks
    Computer Research and Modeling, 2016, v. 8, no. 5, pp. 775-786

    Road network infrastructure is the basis of any urban area. This article compares the structural characteristics (meshedness coefficient, clustering coefficient) of the road networks of the Moscow center (Old Moscow), formed as a result of self-organization, and of the roads near Leninsky Prospekt (postwar Moscow), which resulted from centralized planning. Data for constructing the road networks as graphs were taken from the Internet resource OpenStreetMap, which allows the coordinates of intersections to be identified accurately. Based on the calculated characteristics of the Moscow road network areas, cities with road networks structurally similar to the two Moscow areas were found in foreign publications. Using the dual representation of the road networks of the centers of Moscow and St. Petersburg, the informational and cognitive features of navigation in these tourist areas of the two capitals were studied. The construction of the dual graphs of the studied areas did not take into account the different types of roads (one-way or two-way traffic, etc.), i.e. the dual graphs built are undirected. Since road networks in the dual representation are described by a power-law distribution of vertices over the number of edges (scale-free networks), the exponents of these distributions were calculated. It is shown that the information complexity of the dual graph of the center of Moscow exceeds the cognitive threshold of 8.1 bits, while the same measure for the center of St. Petersburg is below this threshold, because the road network of the center of St. Petersburg was created by planning and is therefore easier to navigate. In conclusion, using the methods of statistical mechanics (the method of calculating partition functions), the Gibbs entropy was calculated for the road networks of some Russian cities. It was found that the entropy decreases as the road network size increases. We discuss the problem of studying the evolution of urban infrastructure networks of different nature (public transport, supply, communication networks, etc.), which allows a deeper exploration and understanding of the fundamental laws of urbanization.
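
    One of the quantities mentioned, an entropy measured in bits, can be illustrated on the degree distribution of a dual graph. A minimal sketch (the degree sequence is a toy example, and the paper's exact definition of information complexity is not reproduced here):

```python
# Shannon entropy (in bits) of the empirical degree distribution of a graph,
# e.g. a road network in its dual representation (illustrative sketch).
import numpy as np

def degree_entropy_bits(degrees):
    _, counts = np.unique(np.asarray(degrees), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

toy_degrees = [1, 2, 2, 3, 3, 3, 5, 8, 13, 21]  # toy dual-graph degrees
print(degree_entropy_bits(toy_degrees))
```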

  5. Kotliarova E.V., Gasnikov A.V., Gasnikova E.V., Yarmoshik D.V.
    Finding equilibrium in two-stage traffic assignment model
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 365-379

    The authors describe a two-stage traffic assignment model. It consists of two blocks. The first block is a model for calculating the correspondence (demand) matrix, while the second block is a traffic assignment model. The first model calculates the correspondence matrix from a matrix of transport costs, which characterizes the cost of moving from one area to another (travel time in this case). To solve this problem, the authors propose to use one of the most popular methods of calculating the correspondence matrix in urban studies — the entropy model. The second model describes exactly how the movement demand specified by the correspondence matrix is distributed along the possible paths. Knowing how the flows are distributed along the paths, one can calculate the cost matrix. Equilibrium in the two-stage model is a fixed point of the sequence of these two models. In practice, the problem of finding a fixed point can be solved by the fixed-point iteration method. Unfortunately, the convergence of this method and estimates of its convergence rate have not yet been studied thoroughly. In addition, the numerical implementation of the algorithm runs into many problems. In particular, if the starting point is chosen poorly, situations may arise where the algorithm has to compute extremely large numbers and exceeds the available memory even on the most modern computers. The article therefore proposes a method that reduces the problem of finding the equilibrium to a problem of convex non-smooth optimization, as well as a numerical method for solving the resulting optimization problem. Numerical experiments were carried out for both approaches. The authors used data for Vladivostok (information from various sources was processed and collected into a new dataset for this city) and for two smaller cities in the USA. Convergence could not be achieved with the fixed-point iteration method, whereas the second approach demonstrated a convergence rate of $k^{-1.67}$ on the same dataset.
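
    A minimal sketch of the fixed-point iteration between the two blocks; entropy_model and traffic_assignment below are crude stand-ins (a Sinkhorn-style balancing and a toy congestion law), not the models from the paper:

```python
# Fixed-point iteration of a two-stage model (sketch): alternate between
# computing a demand matrix from costs and recomputing costs from demand.
import numpy as np

def entropy_model(T, l, w, n_iter=200):
    K, u, v = np.exp(-T), np.ones_like(l), np.ones_like(w)
    for _ in range(n_iter):
        u = l / (K @ v); v = w / (K.T @ u)
    return u[:, None] * K * v[None, :]

def traffic_assignment(d):
    return 1.0 + 0.5 * d  # toy congestion: costs grow with assigned demand

l = w = np.full(3, 1.0)
T = np.ones((3, 3))
for _ in range(50):       # d -> T -> d -> ... until (hopefully) a fixed point
    d = entropy_model(T, l, w)
    T = traffic_assignment(d)
print(np.round(d, 3))
```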

  6. Gasnikov A.V., Kubentayeva M.B.
    Searching stochastic equilibria in transport networks by universal primal-dual gradient method
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 335-345

    We consider one of the problems of transport modelling: searching for the equilibrium distribution of traffic flows in a network. We use the classic Beckmann model to describe time costs and flow distribution in a network represented by a directed graph. The agents' behavior is not completely rational, which is described by introducing Markov logit dynamics: each driver selects a route randomly according to the Gibbs distribution, taking into account the current time costs on the edges of the graph. The problem is thus reduced to finding the stationary distribution of this dynamics, which is a stochastic Nash – Wardrop equilibrium in the corresponding population congestion game on the transport network. Since the game is potential, this problem is equivalent to minimizing a certain functional over flow distributions. The stochasticity manifests itself as an entropy regularization term that is absent in the non-stochastic case. The dual problem is constructed to obtain a solution of the optimization problem, and the universal primal-dual gradient method is applied to it. A major feature of this method is its adaptive adjustment to the local smoothness of the problem, which is especially important when the objective function has a complex structure and a prior smoothness bound cannot be obtained with acceptable accuracy. Such a situation occurs in the considered problem, since the properties of the function strongly depend on the transport graph, on which we do not impose strong restrictions. The article describes the algorithm, including the numerical differentiation used to calculate the value and gradient of the objective function. In addition, the paper presents a theoretical estimate of the time complexity of the algorithm and the results of numerical experiments conducted on the network of a small American town.
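
    The logit (Gibbs) route choice at the heart of the dynamics is easy to state concretely. A sketch, where gamma is an assumed name for the stochasticity parameter:

```python
# Gibbs (logit) route choice: a driver picks route r with probability
# proportional to exp(-cost_r / gamma), given current time costs (sketch).
import numpy as np

def gibbs_route_probabilities(costs, gamma=1.0):
    z = np.exp(-(costs - costs.min()) / gamma)  # shift for numerical stability
    return z / z.sum()

costs = np.array([10.0, 12.0, 15.0])  # illustrative route time costs
print(gibbs_route_probabilities(costs, gamma=2.0))
```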

  7. Podlipnova I.V., Persiianov M.I., Shvetsov V.I., Gasnikova E.V.
    Transport modeling: averaging price matrices
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 317-327

    This paper considers various approaches to averaging the generalized travel costs calculated for different modes of travel in a transportation network. A mode of travel is understood to mean both a mode of transport, for example, a car or public transport, and movement without the use of transport, for example, on foot. The task of calculating trip matrices includes the calculation of the total matrices, in other words, an estimate of the total demand for movements by all modes, as well as the splitting of the matrices according to mode, also called modal splitting. To calculate trip matrices, gravity, entropy and other models are used, in which the probability of movement between zones is estimated based on a certain measure of the remoteness of these zones from each other. Usually, the generalized cost of moving along the optimal path between zones is used as the distance measure. However, the generalized cost of movement differs across modes of travel. When calculating the total trip matrices, it therefore becomes necessary to average the generalized costs over the modes of travel. The averaging procedure is subject to the natural requirement of monotonicity in all arguments. This requirement is not met by some commonly used averaging methods, for example, averaging with weights. The problem of modal splitting is solved by the methods of discrete choice theory. In particular, within the framework of discrete choice theory, correct methods have been developed for averaging the utilities of alternatives that are monotonic in all arguments. The authors propose an adaptation of the methods of discrete choice theory to the calculation of the average cost of movements in the gravity and entropy models. Transferring the averaging formulas from the context of the modal splitting model to the trip matrix calculation model requires introducing new parameters and deriving conditions on their possible values, which is done in this article. The paper also considers the recalibration of the gravity function that becomes necessary when switching to a new averaging method if the existing function was calibrated using the weighted average cost. The proposed methods are implemented on the example of a small fragment of a transport network, and the results of calculations demonstrating their advantages are presented.
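
    A standard discrete-choice average that is monotone in every argument is the logsum (smoothed minimum) of the per-mode generalized costs. A minimal sketch under that assumption, with theta an illustrative scale parameter (the paper's own formulas and parameter conditions are not reproduced here):

```python
# Logsum averaging of per-mode generalized costs (sketch): a smooth minimum
#   c_avg = -theta * log(sum_m exp(-c_m / theta)),
# monotone in each mode cost c_m.
import numpy as np

def logsum_average_cost(costs, theta=1.0):
    c = np.asarray(costs, dtype=float)
    m = c.min()  # factor out the minimum for numerical stability
    return float(m - theta * np.log(np.exp(-(c - m) / theta).sum()))

print(logsum_average_cost([30.0, 45.0], theta=5.0))  # e.g. car vs transit
```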

  8. Minkevich I.G.
    On the kinetics of entropy of a system with discrete microscopic states
    Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1207-1236

    An isolated system possessing a discrete set of microscopic states is considered. The system performs spontaneous random transitions between the microstates. Kinetic equations are formulated for the probabilities of the system staying in its various microstates, and a general dimensionless expression for the entropy of such a system, depending on the probability distribution, is considered. Two problems are stated: 1) to study the effect of possibly unequal probabilities of different microstates, in particular when the system is in internal equilibrium, on the value of the system entropy, and 2) to study the kinetics of the microstate probability distribution and the entropy evolution of the system in nonequilibrium states. The kinetics of the transitions between microstates is assumed to be first-order. Two variants of the possible nonequiprobability of the microstates are considered: i) the microstates form two subgroups, within each of which the probabilities are similar but differ between the subgroups, and ii) the microstate probabilities vary arbitrarily around the point at which they are all equal. It is found that, for a fixed total number of microstates, the deviations of the entropy from the value corresponding to the equiprobable microstate distribution are extremely small. This is a rigorous substantiation of the well-known hypothesis that microstates are equiprobable at thermodynamic equilibrium. On the other hand, several characteristic examples show that the structure of the random transitions between microstates has a considerable effect on the rate and mode of establishment of the system's internal equilibrium, on the time dependence of the entropy, and on the expression for the entropy production rate. Under certain transition schemes, fast and slow components can appear in the transients, and transients in the form of damped oscillations are possible. The condition for universality and stability of the equilibrium microstate distribution is that for any pair of microstates there should exist a sequence of transitions leading from one microstate to the other; consequently, there should be no microstate traps.
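
    The first-order kinetics and the entropy along its trajectory can be sketched in a few lines. The 3-state rate matrix below is purely illustrative, not from the paper:

```python
# First-order master equation dp/dt = W p with entropy S = -sum(p ln p)
# tracked along the trajectory (illustrative sketch).
import numpy as np

k = np.array([[0.0, 0.5, 0.2],
              [1.0, 0.0, 0.3],
              [0.4, 0.6, 0.0]])    # k[i, j] = transition rate j -> i
W = k - np.diag(k.sum(axis=0))     # columns sum to 0: probability conserved

p, dt = np.array([1.0, 0.0, 0.0]), 0.01
for step in range(1001):
    if step % 250 == 0:
        S = -sum(q * np.log(q) for q in p if q > 0)
        print(f"t={step * dt:5.2f}  p={np.round(p, 3)}  S={S:.4f}")
    p = p + dt * (W @ p)           # explicit Euler step
```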

  9. Lyubushin A.A., Farkov Y.A.
    Synchronous components of financial time series
    Computer Research and Modeling, 2017, v. 9, no. 4, pp. 639-655

    The article proposes a method of joint analysis of multidimensional financial time series based on estimating a set of properties of stock quotes in a sliding time window and then averaging the property values over all analyzed companies. The main purpose of the analysis is to construct measures of the joint behavior of the time series that react to the occurrence of a synchronous or coherent component. The coherence of the behavior of the characteristics of a complex system is an important feature that makes it possible to evaluate the approach of the system to sharp changes in its state. The basis for the search for precursors of sharp changes is the general idea that the correlation of random fluctuations of the system parameters increases as the system approaches a critical state. The increments of stock price time series are strongly chaotic and carry a large amplitude of individual noise, against which a weak common signal can be detected only through its correlation across the scalar components of the multidimensional time series. It is known that classical analysis methods based on correlations between neighboring samples are ineffective for financial time series, since, from the point of view of the correlation theory of random processes, stock price increments formally have all the attributes of white noise (in particular, a "flat spectrum" and a "delta-shaped" autocorrelation function). It is therefore proposed to pass from analyzing the initial signals to examining sequences of their nonlinear properties calculated in short time fragments. As such properties, the entropy of the wavelet coefficients in a decomposition over a Daubechies basis, multifractal parameters, and an autoregressive measure of signal nonstationarity are used. Measures of the synchronous behavior of the time series properties in a sliding time window are constructed using the principal component method, the moduli of all pairwise correlation coefficients, and a multiple spectral coherence measure that generalizes the quadratic coherence spectrum between two signals. The shares of 16 large Russian companies from the beginning of 2010 to the end of 2016 were studied. Using the proposed method, two synchronization intervals of the Russian stock market were identified: from mid-December 2013 to mid-March 2014 and from mid-October 2014 to mid-January 2016.
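
    One of the listed properties, the entropy of wavelet coefficients, can be sketched with PyWavelets; the 'db4' wavelet, the decomposition level and the normalization are assumptions for illustration, not choices taken from the paper:

```python
# Normalized entropy of squared wavelet coefficients in a Daubechies basis
# (sketch): close to its maximum for white-noise-like signals.
import numpy as np
import pywt

def wavelet_entropy(signal, wavelet="db4", level=4):
    coeffs = np.concatenate(pywt.wavedec(signal, wavelet, level=level))
    e = coeffs ** 2
    p = e / e.sum()                # squared coefficients as a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(p.size))  # scaled to [0, 1]

rng = np.random.default_rng(0)
print(wavelet_entropy(rng.standard_normal(512)))
```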

  10. Lyubushin A.A., Kopylova G.N., Kasimova V.A., Taranova L.N.
    Multifractal and entropy statistics of seismic noise in Kamchatka in connection with the strongest earthquakes
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1507-1521

    The study of the properties of seismic noise in Kamchatka is based on the idea that noise is an important source of information about the processes preceding strong earthquakes. The hypothesis is considered that an increase in seismic hazard is accompanied by a simplification of the statistical structure of the seismic noise and by an increase in the spatial correlations of its properties. The entropy of the distribution of squared wavelet coefficients, the support width of the multifractal singularity spectrum, and the Donoho – Johnstone index were used as statistics characterizing the noise. The values of these parameters reflect complexity: if a random signal is close in its properties to white noise, the entropy is maximal and the other two parameters are minimal. The statistics are calculated for 6 clusters of stations. For each cluster, daily median noise properties are calculated in successive 1-day time windows, resulting in an 18-dimensional (3 properties times 6 station clusters) time series of properties. To extract the common features of the changes in the noise parameters, the principal component method is applied to each cluster of stations, compressing the information into a 6-dimensional daily time series of principal components. Spatial noise coherences are estimated as the set of maximum pairwise quadratic coherence spectra between the principal components of the station clusters in a sliding time window of 365 days. By calculating histograms of the distribution of the cluster numbers at which the minimum and maximum values of the noise statistics are attained in a sliding 365-day window, the migration of seismic hazard areas was assessed in comparison with strong earthquakes of magnitude at least 7.
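
    The principal-component compression step admits a compact sketch; the window length and the toy data below are assumptions for illustration:

```python
# First principal component of a multichannel property series in a window
# (sketch): the direction capturing the common, synchronous variation.
import numpy as np

def first_pc_scores(window):
    X = window - window.mean(axis=0)          # center each channel
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[0]                          # scores along the first PC

rng = np.random.default_rng(0)
series = rng.standard_normal((365, 3))        # e.g. 3 daily noise properties
print(first_pc_scores(series)[:5])
```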
