Search results for 'distribution function':
Articles found: 87
  1. Kotliarova E.V., Krivosheev K.Yu., Gasnikova E.V., Sharovatova Y.I., Shurupov A.V.
    Proof of the connection between the Beckmann model with degenerate cost functions and the model of stable dynamics
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 335-342

    The field of city transport modelling has progressed rapidly since the 1950s, when the first equilibrium models of traffic flow distribution appeared. The most popular model, which is still widely used, is the Beckmann model, based on the two Wardrop principles. The core of the model can be briefly described as the search for a Nash equilibrium in a population demand game, in which the losses of agents (drivers) are determined by the chosen path and its cost, with the correspondence matrix fixed. The cost of a path is the sum of the costs of the path segments (graph edges) it comprises. The cost of an edge (its travel time) is determined by the amount of traffic on that edge: more traffic means a longer travel time. The flow on a graph edge, in turn, is the sum of the flows over all paths passing through it. Thus, the cost of traveling along a path depends not only on the choice of the path but also on the paths other drivers have chosen, which makes this a standard game-theoretic problem. The way the cost functions are constructed allows the search for equilibrium to be reduced to solving an optimization problem (the game is potential in this case). If the cost functions are monotone and non-decreasing, the optimization problem is convex. Different assumptions about the cost functions yield different models; the most popular one is based on the BPR cost function, and such functions are widely used in calculations for real cities. However, at the beginning of the 21st century, Yu. E. Nesterov and A. de Palma showed that Beckmann-type models have serious weaknesses, which can be fixed using what the authors called the stable dynamics model. The search for equilibrium in that model also reduces to an optimization problem, and moreover to a linear programming problem.

    In 2013, A. V. Gasnikov discovered that the stable dynamics model can be obtained by passing to the limit in the Beckmann model. However, this was shown only for several practically important but still special cases; in general, the question of whether this passage to the limit is possible remained open. In this paper, we justify the possibility of the above-mentioned passage to the limit in the general case, in which the cost function for traveling along an edge, as a function of the flow along that edge, degenerates into a function equal to a fixed cost until the capacity is reached and to plus infinity once the capacity is exceeded.
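    The degeneration described above can be illustrated numerically. The following is a minimal sketch (with assumed, standard BPR coefficients, not the paper's code): as the power in the BPR function grows, the edge cost tends to the fixed free-flow cost below capacity and blows up above it, which is exactly the stable-dynamics limit.

```python
# Sketch of how a BPR-type edge cost degenerates into the stable-dynamics
# cost: t(f) = t0 * (1 + rho * (f / c)**beta)
# tends to t0 for f < c and to +infinity for f > c as beta grows.

def bpr_cost(f, t0=1.0, c=100.0, rho=0.15, beta=4.0):
    """BPR travel-time function for flow f on an edge with capacity c."""
    return t0 * (1.0 + rho * (f / c) ** beta)

for beta in (4, 16, 64, 256):
    below = bpr_cost(90.0, beta=beta)   # f < c: tends to t0 = 1
    above = bpr_cost(110.0, beta=beta)  # f > c: tends to +infinity
    print(f"beta={beta:4d}  t(0.9c)={below:.6f}  t(1.1c)={above:.3e}")
```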

  2. The article deals with the nonlinear boundary-value problem of hydrogen permeability corresponding to the following experiment. A membrane made of the target structural material heated to a sufficiently high temperature serves as the partition in the vacuum chamber. Degassing is performed in advance. A constant pressure of gaseous (molecular) hydrogen is built up at the inlet side. The penetrating flux is determined by mass-spectrometry in the vacuum maintained at the outlet side.

    A linear dependence on concentration is adopted for the diffusion coefficient of dissolved atomic hydrogen in the bulk, and the temperature dependence conforms to the Arrhenius law. The surface processes of dissolution and sorption-desorption are taken into account in the form of nonlinear dynamic boundary conditions (differential equations for the dynamics of the surface concentrations of atomic hydrogen). The characteristic mathematical feature of the boundary-value problem is that concentration time derivatives appear both in the diffusion equation and in the boundary conditions with quadratic nonlinearity. In terms of the general theory of functional differential equations, this leads to so-called neutral-type equations and requires a more complex mathematical apparatus. An iterative computational algorithm of second- (or higher-) order accuracy based on explicit-implicit difference schemes is suggested for solving the corresponding nonlinear boundary-value problem. To avoid solving a nonlinear system of equations at every time step, the explicit component of the difference scheme is applied to the slower sub-processes.
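    The splitting idea can be sketched on a much simpler toy problem than the paper's (all equations and coefficients below are illustrative assumptions): bulk diffusion is stepped implicitly, while a slower surface ODE with a quadratic desorption term is stepped explicitly, so no nonlinear system has to be solved at any time step.

```python
import numpy as np

# Toy explicit-implicit splitting: bulk diffusion c_t = D c_xx is stepped
# implicitly (backward Euler), while the slower surface ODE
# dq/dt = mu_p - b*q**2 (supply vs. quadratic desorption) is explicit.

D, L, N = 1e-2, 1.0, 50          # diffusivity, thickness, grid points
dx = L / (N - 1)
dt = 1e-3
mu_p, b, g = 1.0, 0.5, 1.0       # inlet supply, desorption, surface-bulk coupling

c = np.zeros(N)                  # bulk concentration
q = 0.0                          # inlet surface concentration

# Backward-Euler matrix for the diffusion step (prescribed-value end rows).
r = D * dt / dx**2
A = np.eye(N) * (1 + 2 * r)
for i in range(1, N - 1):
    A[i, i - 1] = A[i, i + 1] = -r
A[0, :] = 0.0
A[-1, :] = 0.0
A[0, 0] = A[-1, -1] = 1.0

for step in range(10_000):
    q += dt * (mu_p - b * q**2)  # explicit step for the slow surface ODE
    rhs = c.copy()
    rhs[0] = g * q               # bulk boundary value tied to surface concentration
    rhs[-1] = 0.0                # vacuum (outgassing) side
    c = np.linalg.solve(A, rhs)  # implicit step for the fast diffusion

# The surface concentration settles at sqrt(mu_p / b).
print(f"q = {q:.4f}, expected ~ {np.sqrt(mu_p / b):.4f}")
```

    In a real code the constant matrix would be factored once (e.g. a tridiagonal sweep); the point here is only the ordering of the explicit and implicit sub-steps.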

    The results of numerical modeling are presented to confirm that the model fits the experimental data. The degree of impact of variations in the hydrogen permeability parameters (“derivatives”) on the penetrating flux and on the concentration distribution of H atoms across the sample thickness is determined. This knowledge is important, in particular, when designing protection against hydrogen embrittlement or membrane technologies for producing high-purity hydrogen. The computational algorithm enables using the model to analyze extreme regimes for structural materials (pressure drops, high temperatures, unsteady heating), to identify the limiting factors under specific operating conditions, and to save on costly experiments (especially in deuterium-tritium investigations).

    The paper develops a new mathematical method for the joint estimation of signal and noise under the Rice statistical distribution, based on combining the maximum likelihood method and the method of moments. The sought-for values of signal and noise are calculated by processing sampled measurements of the amplitude of the analyzed Rician signal. An explicit system of equations for the required signal and noise parameters has been obtained, and the results of its numerical solution are provided, confirming the efficiency of the proposed technique. It is shown that solving the two-parameter task by means of the proposed technique does not increase the required computational resources compared with solving the task in the one-parameter approximation. An analytical solution has been obtained for the particular case of a small signal-to-noise ratio. The paper investigates how the accuracy and dispersion of the sought-for parameter estimates depend on the number of measurements in the experimental sample. According to the results of numerical experiments, the dispersion of the estimated signal and noise parameters calculated by the proposed technique changes in inverse proportion to the number of measurements in a sample. The accuracy of estimating the Rician parameters by the proposed technique has been compared with that of an earlier version of the method of moments. The problem considered in the paper is meaningful for Rician data processing, in particular in magnetic resonance imaging systems, in ultrasonic visualization devices, in the analysis of optical signals in range-measuring systems and of radar signals, as well as in many other scientific and applied tasks adequately described by the Rice statistical model.
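    For reference, the moments part alone (the classical method-of-moments estimator for the Rice distribution, not the paper's combined likelihood-moments technique) can be sketched as follows. For a Rician amplitude x with signal nu and noise sigma, E[x^2] = nu^2 + 2 sigma^2 and E[x^4] = nu^4 + 8 nu^2 sigma^2 + 8 sigma^4, hence nu^4 = 2 E[x^2]^2 - E[x^4].

```python
import numpy as np

def rice_moments_estimate(x):
    """Estimate (nu, sigma) of a Rice distribution from amplitude samples
    using the second and fourth sample moments."""
    m2 = np.mean(x**2)
    m4 = np.mean(x**4)
    nu4 = max(2.0 * m2**2 - m4, 0.0)      # guard against sampling noise
    nu = nu4 ** 0.25
    sigma = np.sqrt(max(m2 - nu**2, 0.0) / 2.0)
    return nu, sigma

# Synthetic check: Rician amplitude = |signal + complex Gaussian noise|.
rng = np.random.default_rng(0)
nu_true, sigma_true, n = 3.0, 1.0, 200_000
x = np.abs(nu_true + sigma_true * rng.standard_normal(n)
           + 1j * sigma_true * rng.standard_normal(n))
nu_hat, sigma_hat = rice_moments_estimate(x)
print(f"nu ~ {nu_hat:.3f}, sigma ~ {sigma_hat:.3f}")
```

    Consistent with the abstract's observation, the variance of these estimates falls in inverse proportion to the sample size.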

    Views (last year): 11.
  4. Ketova K.V., Romanovsky Y.M., Rusyak I.G.
    Mathematical modeling of the human capital dynamics
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 329-342

    In the modern economy, human capital is one of the main factors of economic growth. The formation of human capital begins at a person's birth and continues throughout life, so the value of human capital is inseparable from its carriers, which in turn makes this factor difficult to account for. As a result, there are currently no generally accepted methods of calculating the value of human capital. There are only a few approaches to measuring it: the cost approach (by income or investment) and the index approach, of which the best known is the approach developed under the auspices of the UN.

    This paper poses the problem in conjunction with the problem of demographic dynamics, solved in the time-age plane, which makes it possible to account more fully for the effect of temporal changes in the demographic structure on the dynamics of human capital.

    The task of demographic dynamics is posed within the framework of the McKendrick–von Foerster model on the basis of the equation of age-structure dynamics. The form of the distribution functions for births, deaths, and migration of the population is determined from the available statistical information. A numerical solution of the problem is given, and an analysis and forecast of demographic indicators are presented. The economic-mathematical model of human capital dynamics is formulated on the basis of the demographic dynamics problem. The model considers three components of human capital: educational, health, and cultural (spiritual). The evolution of the human capital components is described by a transfer-type equation. Investments in the human capital components are determined on the basis of budget expenditures and private expenditures, taking into account the characteristic life-cycle times of the demographic elements. A one-dimensional kinetic equation is used to predict the dynamics of total human capital, and a method for calculating the dynamics of this factor as a function of time is given. The calculated data on human capital dynamics are presented for the Russian Federation. As the study shows, the value of human capital grew rapidly until 2008, was followed by a period of stabilization, and after 2014 has shown negative dynamics.
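    The McKendrick–von Foerster equation n_t + n_a = -mu(a) n, with the renewal boundary condition n(0, t) = integral of beta(a) n(a, t) da, admits a very simple upwind discretization. The sketch below uses illustrative mortality and fertility profiles, not the paper's statistically fitted distribution functions.

```python
import numpy as np

# Upwind scheme for the age-structure equation: cohorts age by one grid
# cell per time step (dt = da), losing a fraction mu*dt to mortality,
# while newborns enter at age 0 via the renewal integral.

A_max, na = 100.0, 200            # maximum age, age grid points
da = A_max / na
dt = da                           # unit CFL: exact transport along characteristics
ages = np.linspace(0.0, A_max, na)

mu = 0.001 + 0.0001 * ages        # assumed mortality rate, rising with age
beta = np.where((ages > 20) & (ages < 40), 0.08, 0.0)  # assumed fertility window

n = np.full(na, 1.0)              # initial age distribution

for step in range(500):
    births = np.sum(beta * n) * da         # renewal (boundary) condition
    n[1:] = n[:-1] * (1.0 - mu[:-1] * dt)  # transport + mortality
    n[0] = births

print(f"total population: {np.sum(n) * da:.1f}")
```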

    Views (last year): 34.
  5. Nedbailo Y.A., Surchenko A.V., Bychkov I.N.
    Reducing miss rate in a non-inclusive cache with inclusive directory of a chip multiprocessor
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 639-656

    Although the era of exponential performance growth in computer chips has ended, processor core counts have reached 16 or more even in general-purpose desktop CPUs. As DRAM throughput is unable to keep pace with this growth in computing power, CPU designers need to find ways of lowering memory traffic per instruction. The straightforward way to do this is to reduce the miss rate of the last-level cache. Assuming the “non-inclusive cache, inclusive directory” (NCID) scheme is already implemented, three ways of further reducing the cache miss rate were studied.

    The first is to achieve more uniform usage of cache banks and sets by employing hash-based interleaving and indexing. In experiments on the SPEC CPU2017 refrate tests, even the simplest XOR-based hash functions demonstrated performance increases of 3.2%, 9.1%, and 8.2% for CPU configurations with 16, 32, and 64 cores and last-level cache banks, comparable to the results of more complex matrix-, division-, and CRC-based functions.
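    The effect of XOR-based indexing is easy to demonstrate on a toy model (the paper's exact hash functions are not reproduced here; the parameters below are assumptions). Folding higher address bits into the set-index bits by XOR spreads power-of-two strided accesses across sets instead of piling them into one.

```python
# Toy cache set-index functions: plain bit-slice vs. XOR-folded hash.

SET_BITS = 10                     # 1024 sets
LINE_BITS = 6                     # 64-byte lines
MASK = (1 << SET_BITS) - 1

def plain_index(addr):
    """Conventional indexing: take the bits just above the line offset."""
    return (addr >> LINE_BITS) & MASK

def xor_index(addr):
    """XOR-hashed indexing: fold the next group of tag bits into the index."""
    lo = (addr >> LINE_BITS) & MASK
    hi = (addr >> (LINE_BITS + SET_BITS)) & MASK
    return lo ^ hi

# A 64 KiB stride maps every access to the same set with plain indexing,
# while XOR hashing spreads the accesses over many sets.
addrs = [i * (1 << 16) for i in range(256)]
print("plain:", len({plain_index(a) for a in addrs}), "distinct sets")
print("xor:  ", len({xor_index(a) for a in addrs}), "distinct sets")
```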

    The second optimisation is aimed at reducing replication at different cache levels by automatically switching to the exclusive scheme when that appears optimal. A known scheme of this type, FLEXclusion, was modified for use in NCID caches and showed an average performance gain of 3.8%, 5.4%, and 7.9% for 16-, 32-, and 64-core configurations.

    The third optimisation is to increase the effective cache capacity using compression. The compression rate of the inexpensive and fast BDI*-HL (Base-Delta-Immediate Modified, Half-Line) algorithm, designed for NCID, was measured, and the respective increase in cache capacity yielded roughly a 1% average performance increase.
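    The core idea behind BDI-family compression (sketched generically below; the BDI*-HL variant itself is not reproduced) is to store one base value plus small per-word deltas whenever all words in a line lie close to the base, as pointer-heavy data typically does.

```python
# Generic base-delta compression check for one cache line of 8-byte words.

def bdi_compress_size(line_words, delta_bytes=1):
    """Return the compressed size in bytes for a line of 8-byte words,
    or None if the line is incompressible with this delta width."""
    base = line_words[0]
    limit = 1 << (8 * delta_bytes - 1)       # signed delta range
    deltas = [w - base for w in line_words]
    if all(-limit <= d < limit for d in deltas):
        return 8 + delta_bytes * len(line_words)  # base + packed deltas
    return None

# Nearby addresses compress an 8-word (64-byte) line into 16 bytes;
# widely scattered values do not compress.
pointers = [0x7F3A_0000_1000 + 16 * i for i in range(8)]
print(bdi_compress_size(pointers))
print(bdi_compress_size([0, 2**40, 5, 7]))
```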

    All three optimisations can be combined, together demonstrating performance gains of 7.7%, 16%, and 19% for CPU configurations with 16, 32, and 64 cores and banks, respectively.

  6. Romanovsky M.Y., Vidov P.V., Pyrkin V.A.
    Is a tick an elementary jump in a random walks scheme on the stock market?
    Computer Research and Modeling, 2010, v. 2, no. 2, pp. 219-223

    In this paper, the average times between elementary jumps of stock returns on the Russian market were studied experimentally. By considering the scaling of the probability density function of stock returns over different time intervals, it is shown that an elementary jump in the random-walk scheme for financial instrument returns is a unit price change (tick), corresponding to a single deal on the stock market.

    Views (last year): 3. Citations: 1 (RSCI).
  7. Koltsov Y.V., Boboshko E.V.
    Comparative analysis of optimization methods for electrical energy losses interval evaluation problem
    Computer Research and Modeling, 2013, v. 5, no. 2, pp. 231-239

    This article presents a comparative analysis of optimization methods for the interval estimation of technical losses of electrical energy in 6–20 kV distribution networks. The interval evaluation problem is formulated as a multi-dimensional constrained minimization/maximization problem with an implicit target function. A number of first- and zero-order numerical optimization methods are examined with the aim of determining the most suitable one for the problem of interest. The chosen algorithm is BOBYQA, in which the target function is replaced by its quadratic approximation in a trust region.

    Views (last year): 2. Citations: 1 (RSCI).
  8. Ekaterinchuk E.D., Ryashko L.B.
    Analysis of stochastic attractors for time-delayed quadratic discrete model of population dynamics
    Computer Research and Modeling, 2015, v. 7, no. 1, pp. 145-157

    We consider a time-delayed quadratic discrete model of population dynamics under the influence of random perturbations. Analysis of stochastic attractors of the model is performed using the methods of direct numerical simulation and the stochastic sensitivity function technique. A deformation of the probability distribution of random states around the stable equilibria and cycles is studied parametrically. The phenomenon of noise-induced transitions in the zone of discrete cycles is demonstrated.
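    A direct numerical simulation of this kind of model is easy to sketch. The map and parameters below are illustrative stand-ins (the paper's exact model and coefficients are not restated here): a delayed quadratic map whose deterministic orbit settles onto a stable equilibrium, while additive noise spreads the random states around it.

```python
import numpy as np

# Time-delayed quadratic discrete model under random perturbation:
#   x_{t+1} = r * x_t * (1 - x_{t-1}) + eps * xi_t,  xi_t ~ N(0, 1).

def simulate(r, eps, steps=10_000, seed=1):
    rng = np.random.default_rng(seed)
    x_prev, x = 0.5, 0.5
    out = np.empty(steps)
    for t in range(steps):
        x_prev, x = x, r * x * (1.0 - x_prev) + eps * rng.standard_normal()
        out[t] = x
    return out

# Without noise the orbit settles to the fixed point x* = 1 - 1/r (stable
# for this r); noise produces a spread of states around it.
r = 1.9
det = simulate(r, eps=0.0)
sto = simulate(r, eps=0.01)
print(f"fixed point ~ {1 - 1/r:.4f}, deterministic tail ~ {det[-1]:.4f}, "
      f"noisy std ~ {np.std(sto[-5000:]):.4f}")
```

    The empirical spread of the noisy states around the equilibrium is what the stochastic sensitivity function technique characterizes analytically.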

    Views (last year): 3. Citations: 1 (RSCI).
  9. Epifanov A.V., Tsybulin V.G.
    Regarding the dynamics of cosymmetric predator – prey systems
    Computer Research and Modeling, 2017, v. 9, no. 5, pp. 799-813

    To study the nonlinear effects of biological species interactions, a numerical-analytical approach is developed. The approach is based on the cosymmetry theory, which accounts for the emergence of a continuous family of solutions to differential equations, where each solution can be obtained from an appropriate initial state. In problems of mathematical ecology, the onset of cosymmetry is usually connected with a number of relationships between the parameters of the system. When these relationships break down, the families vanish: instead of a continuum of solutions we get a finite number of isolated ones, and the transient process can be long-lasting, with the dynamics taking place in a neighborhood of the family that vanished due to the cosymmetry collapse.

    We consider a model for the spatiotemporal competition of predators or prey that accounts for directed migration, a Holling type II functional response, and a nonlinear prey growth function permitting the Allee effect. We found the conditions on the system parameters under which there is a cosymmetry linear with respect to the population densities. It is demonstrated that cosymmetry exists for any resource function in the case of a heterogeneous habitat. Numerical experiments in MATLAB are applied to compute steady states and oscillatory regimes in the case of spatial heterogeneity.

    The dynamics of three-population interactions (two predators and a prey; two prey and a predator) are considered. The onset of families of stationary distributions, and of limit cycles branching out of the equilibria of a family that loses stability, is investigated in the case of a homogeneous habitat. The study of the system of two prey and a predator yielded a remarkable result on species coexistence. We have found parameter regions where three families of stable solutions can be realized: the coexistence of two prey in the absence of a predator, and stationary and oscillatory distributions of three coexisting species. The cosymmetry collapse is analyzed, and long-term transient dynamics leading to solutions with the exclusion of one of the prey or the extinction of the predator is established in the numerical experiment.

    Views (last year): 12. Citations: 3 (RSCI).
  10. Varshavskiy A.E.
    A model for analyzing income inequality based on a finite functional sequence (adequacy and application problems)
    Computer Research and Modeling, 2022, v. 14, no. 3, pp. 675-689

    The paper considers the adequacy of a model developed earlier by the author for the analysis of income inequality. The model is based on an empirically confirmed hypothesis that the incomes of the 20% population groups, relative to the income of the richest group, can be represented as a finite functional sequence, each member of which depends on one parameter, a specially defined indicator of inequality. It is shown that, in addition to the existing methods of inequality analysis, the model makes it possible to estimate, with the help of analytical expressions, the income shares of 20%, 10%, and smaller population groups for different levels of inequality, to identify how they change as inequality grows, to estimate the level of inequality from known ratios between the incomes of different population groups, etc.

    The paper provides a more detailed confirmation of the adequacy of the proposed model in comparison with the previously obtained results of the statistical analysis of empirical data on the distribution of income between the 20% and 10% population groups. It is based on the analysis of certain ratios between the values of quintiles and deciles according to the proposed model. These ratios were verified using data for a large number of countries, and the estimates obtained confirm the sufficiently high accuracy of the model.

    Data are presented that confirm the possibility of using the model to analyze the dependence of the income distribution by population groups on the level of inequality, as well as to estimate the inequality indicator from income ratios between different groups. These include the cases when the income of the richest 20% equals the income of the poorest 60%, of the middle-class 40%, or of the remaining 80% of the population; when the income of the richest 10% equals the income of the poorest 40%, 50%, or 60%, or of various middle-class groups; as well as the cases when the distribution of income obeys harmonic proportions and when the quintiles and deciles corresponding to the middle class reach a maximum. It is shown that the income shares of the richest middle-class groups are relatively stable and have a maximum at certain levels of inequality.

    The results obtained with the help of the model can be used to set standards for a policy of gradually increasing progressive taxation in order to move toward the level of inequality typical of countries with socially oriented economies.


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

