Search results for 'exponential distribution':
Articles found: 7
  1. Poddubny V.V., Polikarpov A.A.
    Dissipative Stochastic Dynamic Model of Language Signs Evolution
    Computer Research and Modeling, 2011, v. 3, no. 2, pp. 103-124

    We propose a dissipative stochastic dynamic model of language sign evolution that satisfies the principle of least action, one of the fundamental variational principles of nature. The model assumes a Poisson birth flow of language signs and an exponential distribution of their associative-semantic potential (ASP). The model is based on stochastic difference equations of a special type for dissipative processes. The momentary polysemy distributions and frequency-rank distributions derived from the model do not differ significantly (by the Kolmogorov-Smirnov test) from the empirical distributions obtained from the main Russian and English explanatory dictionaries and the corresponding frequency dictionaries.

    Views (last year): 1. Citations: 6 (RSCI).
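
    As a rough illustration of the ingredients this abstract mentions, and not of the authors' actual model, the following sketch draws a Poisson number of sign "births", assigns each an exponentially distributed associative-semantic potential, and checks the sample against the exponential law with a Kolmogorov-Smirnov test; the parameters birth_rate and asp_scale are arbitrary placeholders.

    # Minimal sketch (not the authors' model): Poisson birth flow of signs
    # with exponentially distributed ASP, checked by a Kolmogorov-Smirnov test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    birth_rate = 50.0   # assumed mean number of sign births per unit time
    asp_scale = 2.0     # assumed mean of the exponential ASP distribution
    T = 100.0           # observation interval

    # Poisson number of births over the interval [0, T]
    n_births = rng.poisson(birth_rate * T)

    # Exponentially distributed associative-semantic potentials
    asp = rng.exponential(scale=asp_scale, size=n_births)

    # Kolmogorov-Smirnov test against the exponential law with the same scale
    stat, p_value = stats.kstest(asp, "expon", args=(0, asp_scale))
    print(f"births: {n_births}, KS statistic: {stat:.4f}, p-value: {p_value:.3f}")
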
  2. Bulinskaya E.V.
    Isotropic Multidimensional Catalytic Branching Random Walk with Regularly Varying Tails
    Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1033-1039

    The study completes a series of the author's works devoted to the spread of a particle population in a supercritical catalytic branching random walk (CBRW) on a multidimensional lattice. The CBRW model describes the evolution of a system of particles combining their random movement with branching (reproduction and death), which occurs only at fixed points of the lattice. The set of such catalytic points is assumed to be finite and arbitrary. In the supercritical regime the size of the population, initiated by a parent particle, increases exponentially with positive probability. The rate of the spread depends essentially on the distribution tails of the random walk jump. If the jump distribution has "light" tails, the "population front", formed by the particles most distant from the origin, moves linearly in time and the limiting shape of the front is a convex surface. When the random walk jump has independent coordinates with a semiexponential distribution, the population spreads at a power rate in time and the limiting shape of the front is a star-shaped non-convex surface. So far, for regularly varying ("heavy") tails, we have considered the problem of scaled front propagation assuming independence of the components of the random walk jump. Now, without this hypothesis, we examine an "isotropic" case, in which the rate of decay of the jump distribution in different directions is given by the same regularly varying function. We specify the probability that, as time goes to infinity, the limiting random set formed by appropriately scaled positions of the population particles belongs to a set $B \subset \mathbb{R}^d$ containing the origin together with its neighborhood. In contrast to the previous results, the random cloud of particles with normalized positions will not, in the time limit, concentrate on the coordinate axes with probability one.
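
    A toy simulation, not the paper's analysis, conveys the setting: particles random-walk on the planar lattice with isotropic, regularly varying jump lengths and branch or die only when sitting on a catalytic point; the rates death_prob and branch_prob and the tail index alpha below are illustrative assumptions.

    # Toy catalytic branching random walk on Z^2 (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)

    catalysts = {(0, 0)}       # finite set of catalytic points (assumed)
    death_prob = 0.2           # probability of dying at a catalyst (assumed)
    branch_prob = 0.6          # probability of splitting in two at a catalyst (assumed)
    alpha = 1.5                # tail index of the regularly varying jump length (assumed)

    def heavy_tailed_jump():
        """Isotropic jump whose length has a Pareto (regularly varying) tail."""
        r = rng.pareto(alpha) + 1.0
        theta = rng.uniform(0.0, 2.0 * np.pi)
        return int(round(r * np.cos(theta))), int(round(r * np.sin(theta)))

    particles = [(0, 0)]                           # one parent particle at the origin
    for _ in range(30):                            # discrete time steps
        next_gen = []
        for x, y in particles:
            if (x, y) in catalysts:
                u = rng.random()
                if u < death_prob:
                    continue                       # the particle dies
                if u < death_prob + branch_prob:
                    next_gen += [(x, y), (x, y)]   # it splits into two offspring
                    continue
            dx, dy = heavy_tailed_jump()           # otherwise the particle jumps
            next_gen.append((x + dx, y + dy))
        particles = next_gen

    if particles:
        radius = max(np.hypot(x, y) for x, y in particles)
        print(f"population: {len(particles)}, max distance from origin: {radius:.1f}")
    else:
        print("the population died out")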

  3. Kozhevnikov V.S., Matyushkin I.V., Chernyaev N.V.
    Analysis of the basic equation of the physical and statistical approach within reliability theory of technical systems
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 721-735

    Verification of the physical-statistical approach within reliability theory was carried out for the simplest cases and showed its validity. An analytical solution of the one-dimensional basic equation of the physical-statistical approach is presented under the assumption of a stationary degradation rate. From a mathematical point of view this equation is the well-known continuity equation, where the role of density is played by the distribution density of goods in their characteristics phase space, and the role of fluid velocity is played by the intensity (rate) of the degradation processes. The latter connects the general formalism with the specifics of the degradation mechanisms. The cases of a coordinate-constant, linear and quadratic degradation rate are analyzed using the method of characteristics. In the first two cases, the results correspond to physical intuition. At a constant degradation rate, the shape of the initial distribution is preserved, and the distribution itself moves uniformly away from zero. At a linear degradation rate, the distribution either narrows down to a narrow peak (in the singular limit) or expands, with the maximum shifting to the periphery at an exponentially increasing rate. The form of the distribution is again preserved up to its parameters. For an initially normal distribution, the coordinates of the largest value of the distribution maximum during its return motion are obtained analytically.

    In the quadratic case, the formal solution demonstrates counterintuitive behavior: it is uniquely defined only on a part of the infinite half-plane, vanishes together with all its derivatives on the boundary, and is ambiguous when crossing that boundary. If it is continued into the other region according to the analytical solution, it takes on a two-humped form, conserves the amount of substance and, which is devoid of physical meaning, is periodic in time. If it is continued by zero, the conservation property is violated. The anomaly of the quadratic case is explained, though not rigorously, by an analogy with the motion of a material point whose acceleration is proportional to the square of its velocity. Here we are dealing with a mathematical curiosity. Numerical calculations are given for all cases. In addition, the entropy of the probability distribution and the reliability function are calculated, and their correlation is traced.
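
    To make the continuity-equation-plus-characteristics machinery concrete, here is a small sketch (not the paper's code) that propagates an initially normal distribution under a linear degradation rate v(x) = k*x using the characteristic solution f(x, t) = f0(x exp(-k t)) exp(-k t), and checks that the amount of substance is conserved; the coefficient k and the initial distribution are arbitrary.

    # Sketch: 1D continuity equation with a linear degradation rate v(x) = k*x,
    # solved by the method of characteristics (illustrative, not the paper's code).
    import numpy as np

    k = 0.5                                # assumed degradation-rate coefficient
    x = np.linspace(-20.0, 20.0, 4001)
    dx = x[1] - x[0]

    def f0(x):
        """Initial normal distribution N(2, 1)."""
        return np.exp(-(x - 2.0) ** 2 / 2.0) / np.sqrt(2.0 * np.pi)

    def f(x, t):
        """Characteristic solution of f_t + (k*x*f)_x = 0."""
        return f0(x * np.exp(-k * t)) * np.exp(-k * t)

    for t in (0.0, 1.0, 2.0):
        density = f(x, t)
        mass = density.sum() * dx          # total amount of "substance"
        x_peak = x[np.argmax(density)]     # position of the distribution maximum
        print(f"t = {t:.1f}: mass = {mass:.4f}, peak at x = {x_peak:.2f}")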

  4. Ignashin I.N., Yarmoshik D.V.
    Modifications of the Frank–Wolfe algorithm in the problem of finding the equilibrium distribution of traffic flows
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 53-68

    The paper presents various modifications of the Frank–Wolfe algorithm for the equilibrium traffic assignment problem. The Beckmann model is used as the model for the experiments. The article focuses, first of all, on the choice of the direction of the basic step of the Frank–Wolfe algorithm. The following algorithms are presented: Conjugate Frank–Wolfe (CFW), Bi-conjugate Frank–Wolfe (BFW), and Fukushima Frank–Wolfe (FFW). Each modification corresponds to a different approach to the choice of this direction. Some of these modifications were described in previous works of the authors. In this article, the following algorithms are proposed: N-conjugate Frank–Wolfe (NFW) and Weighted Fukushima Frank–Wolfe (WFFW). These algorithms are a conceptual continuation of the BFW and FFW algorithms. Thus, while BFW uses at each iteration the last two directions of the previous iterations to select the next direction conjugate to them, the proposed NFW algorithm uses more of the previous directions, namely $N$ of them. In Fukushima Frank–Wolfe, the average of several previous directions is taken as the next direction. Following this algorithm, the modification WFFW is proposed, which applies exponential smoothing to the previous directions. For a comparative analysis, experiments with the various modifications were carried out on several data sets representing urban structures and taken from publicly available sources. The relative gap value was taken as the quality metric. The experimental results showed the advantage of algorithms that use previous directions for step selection over the classic Frank–Wolfe algorithm. In addition, an improvement in efficiency was revealed when using more than two conjugate directions; for example, on various datasets the modification 3FW showed the best convergence. Moreover, the proposed modification WFFW often outperformed FFW and CFW, although it performed worse than NFW.
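
    The flavour of the direction-averaging modifications can be conveyed by a generic sketch (not the authors' WFFW implementation): a Frank–Wolfe loop on a toy quadratic over the probability simplex in which the target of the step is an exponentially smoothed mixture of the current and previous linear-minimization outputs; the smoothing weight beta and the problem itself are placeholders.

    # Sketch: Frank-Wolfe with an exponentially smoothed step target on a toy
    # problem (illustrative only, not the WFFW implementation from the paper).
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50
    A = rng.standard_normal((n, n))
    Q = A.T @ A + np.eye(n)              # positive-definite quadratic objective
    b = rng.standard_normal(n)

    def grad(x):
        """Gradient of 0.5 * x^T Q x - b^T x."""
        return Q @ x - b

    def lmo(g):
        """Linear minimization oracle over the probability simplex."""
        s = np.zeros(len(g))
        s[np.argmin(g)] = 1.0
        return s

    x = np.ones(n) / n                   # start at the simplex barycentre
    s_smooth = x.copy()                  # exponentially smoothed LMO output
    beta = 0.5                           # assumed smoothing weight

    for it in range(500):
        s = lmo(grad(x))
        s_smooth = beta * s_smooth + (1.0 - beta) * s   # smoothed target, stays feasible
        step = 2.0 / (it + 2.0)                         # standard diminishing step size
        x = x + step * (s_smooth - x)    # convex combination keeps x on the simplex

    g = grad(x)
    gap = g @ (x - lmo(g))               # Frank-Wolfe duality gap at the final iterate
    print(f"final duality gap: {gap:.3e}")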

  5. Beloborodova E.I., Tamm M.V.
    On some properties of short-wave statistics of FOREX time series
    Computer Research and Modeling, 2017, v. 9, no. 4, pp. 657-669

    Financial mathematics is one of the most natural applications of the statistical analysis of time series. Financial time series reflect the simultaneous activity of a large number of different economic agents. Consequently, one expects that methods of statistical physics and the theory of random processes can be applied to them.

    In this paper, we provide a statistical analysis of time series of the FOREX currency market. Of particular interest is the comparison of the time series behavior depending on how time is measured: physical time versus trading time measured in the number of elementary price changes (ticks). The experimentally observed statistics of the time series under consideration (euro-dollar for the first half of 2007 and for 2009, and British pound-dollar for 2007) differ radically depending on the choice of the method of time measurement. When time is measured in ticks, the distribution of price increments is well described by the normal distribution already on a scale of the order of ten ticks. At the same time, when price increments are measured in real physical time, their distribution continues to differ radically from the normal up to scales of the order of minutes and even hours.

    To explain this phenomenon, we investigate the statistical properties of the elementary increments in price and time. In particular, we show that the distribution of the time between ticks for all three time series has a long (1–2 orders of magnitude) power-law tail with an exponential cutoff at large times. We obtain approximate expressions for the waiting-time distributions in all three cases. Other statistical characteristics of the time series (the distribution of elementary price changes, pair correlation functions for price increments and for waiting times) demonstrate fairly simple behavior. Thus, it is the anomalously wide distribution of waiting times that plays the main role in the deviation of the increment distribution from the normal one. Finally, we discuss the possibility of applying a continuous time random walk (CTRW) model to describe the FOREX time series.

    Views (last year): 10.
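
    A minimal continuous-time random walk (CTRW) sketch, with made-up parameters rather than the fitted FOREX ones, illustrates the mechanism described above: with heavy-tailed waiting times between ticks, increments over a fixed number of ticks are close to Gaussian, while increments over a fixed physical-time window typically come out noticeably more leptokurtic.

    # CTRW sketch: heavy-tailed waiting times between ticks, +/-1 price jumps
    # (illustrative parameters, not fitted to the FOREX data of the paper).
    import numpy as np

    rng = np.random.default_rng(3)
    n_ticks = 200_000
    alpha = 1.3                                # assumed tail index of waiting times

    waits = rng.pareto(alpha, n_ticks) + 1.0   # heavy-tailed waiting times
    jumps = rng.choice([-1.0, 1.0], n_ticks)   # elementary price changes
    times = np.cumsum(waits)
    price = np.cumsum(jumps)

    # Increments over a fixed number of ticks (trading time)
    lag = 10
    tick_incr = price[lag:] - price[:-lag]

    # Increments over a fixed window of physical time
    window = 10.0 * waits.mean()
    idx = np.searchsorted(times, times + window)
    valid = idx < n_ticks
    time_incr = price[idx[valid]] - price[valid]

    def excess_kurtosis(z):
        """Zero for a Gaussian; positive for fat-tailed distributions."""
        z = z - z.mean()
        return np.mean(z ** 4) / np.mean(z ** 2) ** 2 - 3.0

    print(f"excess kurtosis, tick time:     {excess_kurtosis(tick_incr):.2f}")
    print(f"excess kurtosis, physical time: {excess_kurtosis(time_incr):.2f}")
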
  6. Nedbailo Y.A., Surchenko A.V., Bychkov I.N.
    Reducing miss rate in a non-inclusive cache with inclusive directory of a chip multiprocessor
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 639-656

    Although the era of exponential performance growth in computer chips has ended, processor core numbers have reached 16 or more even in general-purpose desktop CPUs. As DRAM throughput is unable to keep pace with this computing power growth, CPU designers need to find ways of lowering memory traffic per instruction. The straightforward way to do this is to reduce the miss rate of the last-level cache. Assuming a “non-inclusive cache, inclusive directory” (NCID) scheme is already implemented, three ways of further reducing the cache miss rate were studied.

    The first is to achieve more uniform usage of cache banks and sets by employing hash-based interleaving and indexing. In experiments on the SPEC CPU2017 refrate tests, even the simplest XOR-based hash functions demonstrated a performance increase of 3.2%, 9.1%, and 8.2% for CPU configurations with 16, 32, and 64 cores and last-level cache banks, comparable to the results of more complex matrix-, division- and CRC-based functions.

    The second optimisation is aimed at reducing replication at different cache levels by means of automatically switching to the exclusive scheme when it appears optimal. A known scheme of this type, FLEXclusion, was modified for use in NCID caches and showed an average performance gain of 3.8%, 5.4%, and 7.9% for 16-, 32-, and 64-core configurations.

    The third optimisation is to increase the effective cache capacity using compression. The compression rate of the inexpensive and fast BDI*-HL (Base-Delta-Immediate Modified, Half-Line) algorithm, designed for NCID, was measured, and the respective increase in cache capacity yielded an average performance increase of roughly 1%.

    All three optimisations can be combined, together demonstrating a performance gain of 7.7%, 16%, and 19% for CPU configurations with 16, 32, and 64 cores and banks, respectively.
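
    As a hedged illustration of the XOR-based interleaving idea mentioned above (not the hardware hash actually evaluated in the paper), the sketch below folds higher address bits into the bank index with XOR and compares how evenly a power-of-two strided access pattern spreads over the banks versus plain modulo indexing; the bit widths and the address stream are arbitrary assumptions.

    # Sketch of XOR-based cache bank interleaving vs. plain modulo indexing
    # (illustrative bit widths and access pattern, not the paper's hardware hash).
    from collections import Counter

    LINE_BITS = 6        # 64-byte cache lines (assumed)
    BANK_BITS = 5        # 32 last-level cache banks (assumed)

    def modulo_bank(addr: int) -> int:
        """Plain indexing: the low line-address bits select the bank."""
        return (addr >> LINE_BITS) & ((1 << BANK_BITS) - 1)

    def xor_bank(addr: int) -> int:
        """XOR-based hash: fold several higher bit groups into the bank index."""
        line = addr >> LINE_BITS
        bank = 0
        for _ in range(4):                      # fold four 5-bit groups
            bank ^= line & ((1 << BANK_BITS) - 1)
            line >>= BANK_BITS
        return bank

    # Power-of-two strided accesses: a pathological case for modulo indexing
    stride = 1 << (LINE_BITS + BANK_BITS)
    addrs = [i * stride for i in range(10_000)]

    for name, fn in (("modulo", modulo_bank), ("xor-hash", xor_bank)):
        counts = Counter(fn(a) for a in addrs)
        print(f"{name:8s}: banks used = {len(counts):2d}, "
              f"max share = {max(counts.values()) / len(addrs):.2%}")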

  7. Garanina O.S., Romanovsky M.Y.
    Experimental investigation of Russian citizens' expenses on new cars and a correspondence to their income
    Computer Research and Modeling, 2012, v. 4, no. 3, pp. 621-629

    The distribution of citizens' expenses in modern Russia is investigated experimentally. As in earlier work, new cars were chosen as a representative group of purchased goods. Results of the analysis of new-car sales for 2007–2009 are presented. The main “body” of the probability density of finding a certain number of cars as a function of their price, from some initial price up to ~$60k, is an exponential distribution. A newly found feature of the distribution (unlike 2003–2005) is the existence of a minimum price. For expensive cars (the “tail” of the distribution), the asymptotic form is a Pareto distribution with a power-law exponent slightly larger than the one measured earlier for 2003–2005. The results turned out to be similar to direct measurements of the size distribution of tax returns filed in the USA in 2004, where an exponential distribution of citizens' income above some minimum, with a Pareto-type asymptotic, was also observed.

    Citations: 3 (RSCI).
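
    A small sketch, with invented parameters rather than the car-sales data, of the kind of distribution described: an exponential "body" starting at a minimum price, a Pareto "tail" for expensive items, and a crude check of the tail exponent with the Hill estimator.

    # Sketch: exponential "body" above a minimum price with a Pareto "tail"
    # (invented parameters, not the car-sales data analysed in the paper).
    import numpy as np

    rng = np.random.default_rng(4)

    p_min = 8.0          # assumed minimum price, thousand dollars
    scale = 15.0         # assumed mean of the exponential body
    tail_start = 60.0    # assumed crossover to the Pareto tail
    pareto_exp = 2.5     # assumed Pareto (power-law) exponent
    tail_frac = 0.05     # assumed share of "expensive" cars
    n = 100_000

    body = p_min + rng.exponential(scale, int(n * (1 - tail_frac)))
    tail = tail_start * (1.0 + rng.pareto(pareto_exp, int(n * tail_frac)))
    prices = np.concatenate([body, tail])

    # Crude Hill estimate of the tail exponent from the top 0.5% of prices
    top = np.sort(prices)[-n // 200:]
    hill = 1.0 / np.mean(np.log(top / top[0]))
    print(f"mean price: {prices.mean():.1f} k$, "
          f"Hill tail-exponent estimate: {hill:.2f}")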
