Search results for 'parameter estimation':
Articles found: 89
  1. Silaeva V.A., Silaeva M.V., Silaev A.M.
    Estimation of models parameters for time series with Markov switching regimes
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 903-918

    The paper considers the problem of estimating the parameters of time series described by regression models with Markov switching between two regimes at random instants of time, with independent Gaussian noise. For the solution, we propose a variant of the EM algorithm based on an iterative procedure that alternates between estimating the regression parameters for a given regime-switching sequence and estimating the switching sequence for the given parameters of the regression models. In contrast to the well-known methods for estimating regression parameters in models with Markov switching, which are based on computing the posterior probabilities of the discrete states of the switching sequence, here estimates of the switching sequence are computed that are optimal by the maximum a posteriori probability criterion. As a result, the proposed algorithm turns out to be simpler and requires fewer computations. Computer simulation reveals the factors that influence estimation accuracy: the number of observations, the number of unknown regression parameters, the degree to which the parameters differ between regimes, and the signal-to-noise ratio, which is related to the coefficient of determination in regression models. The proposed algorithm is applied to estimating the parameters of regression models for the daily return of the RTS index as a function of the returns of the S&P 500 index and Gazprom shares over the period from 2013 to 2018. The parameter estimates obtained with the proposed algorithm are compared with those produced by the EViews econometric package and with ordinary least squares estimates that ignore regime switching. Taking regime switching into account yields a more accurate picture of the statistical dependence between the variables under study. In the switching models, as the signal-to-noise ratio grows, the differences between the estimates produced by the proposed algorithm and by EViews diminish.
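    The alternating scheme described above can be sketched in a few lines. This is a minimal illustration on synthetic data with hard (MAP-style) regime assignments and without the Markov transition structure used in the paper; all variable names and values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-regime data: y = a_s + b_s * x + noise, with hidden regime s in {0, 1}.
n = 400
x = rng.normal(size=n)
true_params = np.array([[0.0, 1.0], [2.0, -1.0]])   # (intercept, slope) per regime
states = (rng.random(n) < 0.5).astype(int)
y = true_params[states, 0] + true_params[states, 1] * x + 0.1 * rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
params = np.array([[0.5, 0.5], [1.5, -0.5]])        # rough initial guess

# Alternate between (a) MAP-style hard assignment of each observation to the
# regime with the smaller squared residual and (b) per-regime OLS refits.
for _ in range(50):
    resid = y[:, None] - X @ params.T               # residuals under each regime
    s = np.argmin(resid**2, axis=1)                 # (a) assign regimes
    for k in (0, 1):                                # (b) refit each regime by OLS
        mask = s == k
        if mask.sum() > 2:
            params[k], *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)

print(np.round(params, 2))                          # close to true_params
```

With well-separated regimes, the hard-assignment loop converges in a few iterations; the full algorithm would additionally weight assignments by the Markov transition probabilities.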

    Views (last year): 36.
  2. Neverova G.P., Zhdanova O.L., Kolbina E.A., Abakumov A.I.
    A plankton community: a zooplankton effect in phytoplankton dynamics
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 751-768

    The paper uses methods of mathematical modeling to estimate the influence of zooplankton on the dynamics of phytoplankton abundance. We propose a three-component model of the “phytoplankton–zooplankton” community with discrete time that takes into account the heterogeneity of zooplankton with respect to developmental stage and type of feeding, as well as cannibalism in the zooplankton community, during which mature individuals of some of its species consume juvenile ones. Survival rates at the early stages of the zooplankton life cycle depend explicitly on the interaction between zooplankton and phytoplankton. The loss of phytoplankton biomass to zooplankton consumption is explicitly considered, with a Holling functional response of type II describing saturation during biomass consumption. The dynamics of the phytoplankton community is represented by the Ricker model, which implicitly accounts for the limitation of phytoplankton biomass growth by the availability of external resources (mineral nutrition, oxygen, light, etc.).

    The study analyzed scenarios of the transition from stationary dynamics to fluctuations in the abundance of phyto- and zooplankton for various values of the intrapopulation parameters that determine the dynamics of the species constituting the community, and of the parameters of their interaction. The focus is on exploring the complex modes of community dynamics. Within the model used, phytoplankton dynamics in the absence of interspecific interaction undergoes a series of period-doubling bifurcations. With the appearance of zooplankton, however, the cascade of period-doubling bifurcations in the phytoplankton and in the community as a whole is realized earlier (at lower reproduction rates of phytoplankton cells) than when phytoplankton develops in isolation. Furthermore, variation in the level of cannibalism in zooplankton can significantly change both the current dynamics of the community and its bifurcation scenario; e.g., with a certain structure of zooplankton food relationships, a Neimark–Sacker bifurcation scenario can be realized in the community. Since the level of cannibalism in zooplankton can change through natural maturation, as some individuals reach the carnivorous stage, one can expect pronounced changes in the dynamic mode of the community, i.e., abrupt transitions from regular to quasiperiodic dynamics (according to the Neimark–Sacker scenario) and further to cycles with a short period (period-halving bifurcations).
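    A toy illustration of the ingredients named above (Ricker growth for phytoplankton, Holling type II consumption by zooplankton) in a minimal two-component discrete-time sketch; it omits the paper's zooplankton stage structure and cannibalism, and all parameter names and values are illustrative, not taken from the paper.

```python
import numpy as np

def step(P, Z, r=2.0, K=1.0, a=1.5, h=0.5, e=0.4, m=0.3):
    """One time step of a hypothetical discrete 'phytoplankton (P) - zooplankton (Z)'
    model: Ricker growth for P, Holling type II consumption by Z.
    Parameters (growth rate r, capacity K, attack rate a, handling time h,
    conversion efficiency e, mortality m) are illustrative."""
    consumption = a * P / (1.0 + a * h * P)              # Holling type II response
    P_next = P * np.exp(r * (1.0 - P / K)) - consumption * Z
    Z_next = e * consumption * Z + (1.0 - m) * Z         # growth from feeding minus mortality
    return max(P_next, 0.0), max(Z_next, 0.0)

P, Z = 0.5, 0.1
for _ in range(200):
    P, Z = step(P, Z)
print(P, Z)
```

Varying r in such a model moves the system through the period-doubling cascade the abstract describes; adding the predator shifts where the cascade begins.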

    Views (last year): 3.
  3. Karpaev A.A., Aliev R.R.
    Application of simplified implicit Euler method for electrophysiological models
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 845-864

    A simplified implicit Euler method was analyzed as an alternative to the explicit Euler method, which is commonly used in numerical modeling in electrophysiology. The majority of electrophysiological models are quite stiff, since the dynamics they describe spans a wide spectrum of time scales: a fast depolarization, lasting milliseconds, precedes a considerably slower repolarization, both being phases of the action potential observed in excitable cells. In this work we estimate stiffness by a formula that does not require calculating the eigenvalues of the Jacobian matrix of the studied ODEs. The efficiency of the numerical methods was compared on typical representatives of detailed and conceptual models of excitable cells: the Hodgkin–Huxley model of a neuron and the Aliev–Panfilov model of a cardiomyocyte. The comparison was carried out using norms widely used in biomedical applications. The impact of the stiffness ratio on the speedup of the simplified implicit method was studied: a real gain in speed was obtained for the Hodgkin–Huxley model. The benefits of simple versus high-order methods for electrophysiological models are discussed, along with the stability issues of one of the methods and the reasons for preferring simplified over high-order methods in practical simulations. We calculated higher-order derivatives of the solutions of the Hodgkin–Huxley model for various stiffness ratios; their maximum absolute values turned out to be quite large. These values enter the formula for a numerical method's approximation constant and negate the effect of the other factor in it (a small factor that depends on the order of approximation), which leads to a large global error. 
We performed a qualitative stability analysis of the explicit Euler method and estimated the influence of the model's parameters on the boundary of the region of absolute stability. The latter is used when setting the value of the time step for simulations a priori.
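    The stability contrast between the two methods can be seen on the classical linear test equation y' = λy with λ < 0: explicit Euler requires |1 + hλ| ≤ 1, while implicit Euler is stable for any h > 0. This is the textbook illustration, not the paper's models; all values are illustrative.

```python
lam = -50.0   # stiff decay rate; the exact solution decays as exp(lam * t)
h = 0.1       # step size: h * |lam| = 5, far outside explicit Euler's stability region
steps = 10    # integrate to t = 1

y_exp = y_imp = 1.0
for _ in range(steps):
    y_exp = y_exp + h * lam * y_exp   # explicit Euler: amplification |1 + h*lam| = 4, diverges
    y_imp = y_imp / (1.0 - h * lam)   # implicit Euler: amplification 1/|1 - h*lam| = 1/6, stable

print(abs(y_exp), abs(y_imp))         # explicit blows up, implicit decays toward 0
```

For a nonlinear system, the "simplified" implicit variant replaces the exact nonlinear solve at each step with a cheaper linearized update, which is why it can retain much of this stability advantage at near-explicit cost.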

  4. Safiullina L.F., Gubaydullin I.M.
    Analysis of the identifiability of the mathematical model of propane pyrolysis
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1045-1057

    The article presents the numerical modeling and study of the kinetic model of propane pyrolysis. The study of the reaction kinetics is a necessary stage in modeling the dynamics of the gas flow in the reactor.

    The kinetic model of propane pyrolysis is a nonlinear system of first-order ordinary differential equations whose parameters are the reaction rate constants. Mathematical modeling of the process is based on the mass conservation law. To solve the initial (forward) problem, implicit methods for stiff systems of ordinary differential equations are used. The model contains 60 input kinetic parameters and 17 output parameters corresponding to the reaction substances, of which only 9 are observable. In solving the parameter estimation (inverse) problem, the question arises of the non-uniqueness of the set of parameters that satisfy the experimental data. Therefore, before solving the inverse problem, the possibility of determining the model parameters is analyzed (identifiability analysis).

    To analyze identifiability, we use the orthogonal method, which has proven itself well for analyzing models with a large number of parameters. The algorithm is based on the analysis of the sensitivity matrix by the methods of differential and linear algebra, which shows the degree of dependence of the unknown parameters of the models on the given measurements. The analysis of sensitivity and identifiability showed that the parameters of the model are stably determined from a given set of experimental data. The article presents a list of model parameters from most to least identifiable. Taking into account the analysis of the identifiability of the mathematical model, restrictions were introduced on the search for less identifiable parameters when solving the inverse problem.

    The inverse problem of estimating the parameters was solved using a genetic algorithm. The article presents the found optimal values of the kinetic parameters. A comparison of the experimental and calculated dependences of the concentrations of propane, main and by-products of the reaction on temperature for different flow rates of the mixture is presented. The conclusion about the adequacy of the constructed mathematical model is made on the basis of the correspondence of the results obtained to physicochemical laws and experimental data.
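    The orthogonal method mentioned above can be sketched as follows: parameters are ranked by repeatedly selecting the sensitivity-matrix column with the largest component orthogonal to the span of the columns already selected, so that nearly collinear (poorly identifiable) parameters end up last. A minimal sketch of the idea, not the authors' implementation; the toy sensitivity matrix is invented for the illustration.

```python
import numpy as np

def rank_identifiability(S):
    """Order parameters from most to least identifiable: at each step pick the
    column of the sensitivity matrix S with the largest component orthogonal to
    the span of the columns already chosen."""
    order, remaining = [], list(range(S.shape[1]))
    Q = np.zeros((S.shape[0], 0))                    # orthonormal basis of chosen columns
    while remaining:
        resid = {j: S[:, j] - Q @ (Q.T @ S[:, j]) for j in remaining}
        best = max(remaining, key=lambda j: np.linalg.norm(resid[j]))
        order.append(best)
        remaining.remove(best)
        v, nv = resid[best], np.linalg.norm(resid[best])
        if nv > 1e-12:
            Q = np.column_stack([Q, v / nv])
    return order

# Toy sensitivity matrix: column 2 is nearly proportional to column 0, so the
# corresponding parameter cannot be identified separately and should rank last.
S = np.array([[1.0, 0.0, 0.300],
              [0.0, 1.0, 0.001],
              [0.0, 0.0, 0.001],
              [2.0, 0.5, 0.601]])
print(rank_identifiability(S))   # -> [0, 1, 2]
```

In the paper's setting, S would be the matrix of derivatives of the 9 observable outputs with respect to the 60 kinetic parameters, and the tail of the ranking motivates the restrictions imposed on the less identifiable parameters.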

  5. Syzranova N.G., Andruschenko V.A.
    Numerical modeling of physical processes leading to the destruction of meteoroids in the Earth’s atmosphere
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 835-851

    Within the framework of the pressing problem of comet-asteroid hazard, the physical processes causing the destruction and fragmentation of meteor bodies in the Earth’s atmosphere are numerically investigated. Based on the developed physical-mathematical models that determine the motion of natural space objects in the atmosphere and their interaction with it, the falls of three of the largest and, in some respects, most unusual bolides in the history of meteoritics are considered: Tunguska, Vitim, and Chelyabinsk. Their peculiarity lies in the absence of any material meteorite remains and craters in the area of the presumed crash site for the first two bodies, and in the failure to find, as is assumed, the main parent body of the third (the mass of the fallen fragments being too small compared to the estimated mass). The effects of aerodynamic loads and heat flows on these bodies are studied; these lead to intensive surface mass loss and possible mechanical destruction. The velocities of the studied celestial bodies and the change in their masses are determined from a modernized system of equations of meteor physics. An important factor taken into account here is the variability of the mass-ablation parameter under the action of heat fluxes (radiative and convective) along the flight path. The fragmentation of meteoroids is considered within a progressive crushing model based on the statistical theory of strength, taking into account the influence of the scale factor on the ultimate strength of the objects. The phenomena and effects arising at various kinematic and physical parameters of each of these bodies are revealed, in particular, a change in the ballistics of their flight in the denser layers of the atmosphere consisting in a transition from descent to ascent. 
In this case, the following scenarios can be realized: 1) the return of the body to outer space if its residual velocity exceeds the second cosmic (escape) velocity; 2) the transition of the body to an Earth-satellite orbit if the residual velocity exceeds the first cosmic (orbital) velocity; 3) at lower residual velocities, the return of the body after some time to descent and its fall at a considerable distance from the presumed crash site. It is the realization of one of these three scenarios that explains, for example, the absence of material traces, including craters, in the vicinity of the forest collapse in the case of the Tunguska bolide. Such scenarios have been suggested earlier by other authors; in this paper their realization is confirmed by the results of numerical calculations.
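    The classical single-body equations of meteor physics underlying such models (deceleration by drag, mass loss by ablation in an exponential atmosphere) can be integrated in a few lines. This sketch omits the paper's variable ablation parameter and its fragmentation model, and all numerical values are illustrative, not taken from the paper.

```python
import numpy as np

# Single-body meteor-physics sketch; every value below is illustrative.
Cd, Ch, Hstar = 1.0, 0.1, 8e6       # drag coeff., heat-transfer coeff., heat of ablation (J/kg)
rho0, Hs = 1.225, 7000.0            # sea-level air density (kg/m^3), atmospheric scale height (m)
rho_b = 3000.0                      # meteoroid bulk density (kg/m^3)
g = 9.81
theta = np.radians(45.0)            # entry angle below horizontal

v, m, z = 19000.0, 1e7, 80000.0     # velocity (m/s), mass (kg), altitude (m)
dt = 0.01
while z > 20000.0 and m > 1.0:
    rho = rho0 * np.exp(-z / Hs)                                  # exponential atmosphere
    A = np.pi * (3.0 * m / (4.0 * np.pi * rho_b)) ** (2.0 / 3.0)  # midsection area of a sphere
    v += dt * (-Cd * rho * A * v**2 / (2.0 * m) + g * np.sin(theta))  # drag + gravity
    m += dt * (-Ch * rho * A * v**3 / (2.0 * Hstar))                  # ablation (mass loss)
    z -= dt * v * np.sin(theta)

print(v, m)   # state on reaching 20 km altitude
```

The paper's model additionally lets the ablation parameter vary with the radiative and convective heat fluxes along the trajectory and couples this to progressive fragmentation.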

  6. Darwish A., Leonenko V.N.
    Reducing computational complexity in agent-based epidemiological model calibration: application of deep learning surrogates
    Computer Research and Modeling, 2026, v. 18, no. 1, pp. 185-200

    Acute respiratory infections are a major public health concern, being a leading cause of illness and death in many countries. There is therefore great interest in models and methods capable of simulating the spread of these infections within communities, with the aim of controlling outbreaks and preventing their spread. Agent-based models (ABMs) are among the most important tools in epidemiological research for modeling epidemic dynamics in realistic populations, but they face significant computational challenges both in operation and in calibration to epidemiological data, since parameter estimation typically requires repeated simulations across large parameter spaces to determine plausible values of key epidemiological parameters. This paper addresses the problem of alleviating computational constraints in the inverse problem of calibrating an ABM that simulates the spread of respiratory infections in Saint Petersburg. The paper proposes machine learning surrogates that link epidemic trajectories to the underlying epidemiological parameters, enabling rapid inference of parameter estimates from observed epidemic data. This is done by formulating the calibration of ABMs against epidemiological data as a supervised learning problem, in which sequences extracted from epidemiological trajectories are associated with the underlying epidemiological parameters. The research evaluated the performance of attention-based sequence modeling, probabilistic deep learning, and distributional regression for inferring parameter estimates from truncated sequences of epidemic trajectories. Experimental evaluations demonstrated the effectiveness of this approach and its practicality and ease of application. 
The results also indicated the superiority of attention-based sequence modeling, which showed the most consistent performance across metrics and horizons in accurate parameter estimation and credible uncertainty quantification. Distributional regression also performed well, with particular strengths in point accuracy, while probabilistic deep learning performed poorly, especially at longer input horizons.

  7. Kolobov A.V., Polezhaev A.A.
    Influence of random malignant cell motility on growing tumor front stability
    Computer Research and Modeling, 2009, v. 1, no. 2, pp. 225-232

    Chemotaxis plays an important role in morphogenesis and in processes of structure formation in nature. Both unicellular organisms and single cells in tissue demonstrate this property. In vitro experiments show that many types of transformed cells, especially metastatically competent ones, are capable of directed motion, usually in response to a chemical signal. There is a number of theoretical papers on mathematical modeling of tumour growth and invasion that use the Keller–Segel model for the chemotactic motility of cancer cells. One of the crucial difficulties in using the chemotactic term in modelling tumour growth is the lack of reliable quantitative estimates of its parameters. The 2-D mathematical model of tumour growth and invasion, which takes into account only random cell motility and convective fluxes in compact tissue, has shown that, due to a competitive mechanism, a tumour can grow toward sources of nutrients in the absence of chemotactic cell motility.
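    For reference, the cell-density equation in the Keller–Segel model takes the following standard textbook form (random motility with diffusivity D plus drift up the chemoattractant gradient with sensitivity χ); this is the generic form, not necessarily the exact equations of the cited works:

```latex
% u - cell density, c - chemoattractant concentration,
% D - random-motility (diffusion) coefficient, \chi - chemotactic sensitivity:
\frac{\partial u}{\partial t}
  \;=\; \nabla \cdot \bigl( D\,\nabla u \;-\; \chi\, u\,\nabla c \bigr)
```

Dropping the χ term, as in the model discussed above, leaves only the random-motility (diffusion) contribution.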

    Views (last year): 5. Citations: 7 (RSCI).
  8. Shumov V.V.
    Analysis of socio-informational influence through the examples of US wars in Korea, Vietnam, and Iraq
    Computer Research and Modeling, 2014, v. 6, no. 1, pp. 167-184

    In the first section of the paper, a definition of presentation (perception) functions — components of an individual’s subjective view of the world — is proposed. Using the basic psychophysical law formulated by S. Stevens, and relying on the hypotheses of socialization, rationality, individual choice, complexity of informational influences, dynamics of ideas and perceptions, and accessibility, a formal dependence was derived that makes it possible to calculate the presentation (perception) function for probabilistic indicators (with a known distribution function or subjective probability) and for indicators of interval type. In the second and third sections, the parameters of the presentation function are estimated from surveys of the U.S. population related to the wars in Korea, Vietnam, and Iraq.
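    Stevens' psychophysical law relates perceived magnitude to stimulus intensity by a power function. A two-line sketch; the exponent below is a textbook illustration, not a value estimated in the paper.

```python
# Stevens' power law: perceived magnitude psi = k * S**a for stimulus intensity S.
# k and a are illustrative; a ~ 0.67 is a commonly quoted exponent for loudness.
def stevens(S, k=1.0, a=0.67):
    return k * S ** a

# With a < 1, perception grows sublinearly: an 8-fold increase in intensity
# is perceived as roughly a 4-fold increase.
ratio = stevens(8.0) / stevens(1.0)
print(round(ratio, 2))
```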

    Views (last year): 2. Citations: 3 (RSCI).
  9. Stonyakin F.S., Savchuk O.S., Baran I.V., Alkousa M.S., Titov A.A.
    Analogues of the relative strong convexity condition for relatively smooth problems and adaptive gradient-type methods
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 413-432

    This paper is devoted to variants of improving the convergence rate guarantees of gradient-type algorithms for relatively smooth and relatively Lipschitz-continuous problems given additional information about analogues of strong convexity of the objective function. We consider two classes of problems: convex problems with a relative functional growth condition, and problems (generally non-convex) with an analogue of the Polyak–Lojasiewicz gradient dominance condition with respect to the Bregman divergence. For the first class, we propose two restart schemes for gradient-type methods and justify theoretical convergence estimates for two algorithms with adaptively chosen parameters corresponding to the relative smoothness or relative Lipschitz continuity of the objective function. The first of these algorithms has a simpler iteration stopping criterion, but its near-optimal computational guarantees are justified only on the class of relatively Lipschitz-continuous problems. The restart procedure of the other algorithm, in turn, allowed us to obtain more universal theoretical results: we proved a near-optimal complexity estimate on the class of convex relatively Lipschitz-continuous problems with a functional growth condition, and we obtained linear convergence rate guarantees on the class of relatively smooth problems with a functional growth condition. For the class of problems with an analogue of the gradient dominance condition with respect to the Bregman divergence, estimates of the quality of the output solution were obtained using adaptively selected parameters. At the conclusion of the paper, we also present the results of computational experiments illustrating the performance of the methods for the second approach. 
As examples, we considered a linear inverse Poisson problem (minimizing the Kullback–Leibler divergence), its regularized version, which guarantees relative strong convexity of the objective function, as well as an example of a relatively smooth and relatively strongly convex problem. In particular, the calculations show that a relatively strongly convex function may not satisfy the relative variant of the gradient dominance condition.
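    For reference, the Bregman divergence generated by a prox-function h, and the classical Polyak–Lojasiewicz (gradient dominance) condition of which the paper studies an analogue relative to this divergence, are written as follows (standard formulations, not the paper's exact statements):

```latex
% Bregman divergence generated by a differentiable convex prox-function h:
V(y, x) \;=\; h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle ,
% classical Polyak--Lojasiewicz (gradient dominance) condition with constant \mu > 0,
% f^{*} denoting the minimal value of f:
f(x) - f^{*} \;\le\; \frac{1}{2\mu}\,\bigl\| \nabla f(x) \bigr\|^{2} .
```

The relative setting replaces the Euclidean quantities above with their Bregman counterparts built from V.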

  10. Rusyak I.G., Tenenev V.A.
    Modeling of ballistics of an artillery shot taking into account the spatial distribution of parameters and backpressure
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1123-1147

    The paper provides a comparative analysis of the results obtained by various approaches to modeling an artillery shot. The main problem of internal ballistics, and its particular case the Lagrange problem, are formulated in averaged parameters, where, within the assumptions of the thermodynamic approach, the distribution of pressure and gas velocity over the space behind the projectile is taken into account for the first time for a channel of variable cross section. The Lagrange problem is also stated within the gas-dynamic approach, taking into account spatial (one-dimensional and two-dimensional axisymmetric) changes in the characteristics of the ballistic process. The control volume method is used to numerically solve the system of Euler gas-dynamic equations. Gas parameters at the boundaries of the control volumes are determined using a self-similar solution of the Riemann problem. Based on the Godunov method, a modification of the Osher scheme is proposed that makes it possible to implement a numerical algorithm with second-order accuracy in space and time. The solutions obtained within the thermodynamic and gas-dynamic approaches are compared for various loading parameters. The effect of projectile mass and chamber broadening on the distribution of the ballistic parameters of the shot and on the dynamics of the projectile motion was studied. It is shown that the thermodynamic approach, compared with the gas-dynamic one, leads to a systematic overestimation of the computed muzzle velocity of the projectile over the entire range of parameters studied, with the difference in muzzle velocity reaching 35%. At the same time, the discrepancy between the results of the one-dimensional and two-dimensional gas-dynamic models of the shot over the same range of parameters is no more than 1.3%.

    A spatial gas-dynamic formulation of the backpressure problem is given, describing the change in pressure ahead of the accelerating projectile as it moves along the barrel channel. It is shown that accounting for the projectile’s front, as in the two-dimensional axisymmetric formulation of the problem, leads to a significant difference in the pressure fields behind the shock-wave front compared with the solution in the one-dimensional formulation, where the projectile’s front cannot be taken into account. It is concluded that this can significantly affect the results of modeling the ballistics of a shot at high projectile velocities.


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

