Search results for 'modeling methods':
Articles found: 404
  1. Abramova E.P., Ryazanova T.V.
    Dynamic regimes of the stochastic “prey – predatory” model with competition and saturation
    Computer Research and Modeling, 2019, v. 11, no. 3, pp. 515-531

    We consider a “predator – prey” model that takes into account competition among prey, competition of predators for resources other than the prey, and their interaction described by a Holling type II trophic function. An analysis of the attractors is carried out depending on the coefficient of competition of predators. In the deterministic case, this model demonstrates complex behavior associated with local (Andronov–Hopf and saddle-node) and global (birth of a cycle from a separatrix loop) bifurcations. An important feature of this model is the disappearance of a stable cycle due to a saddle-node bifurcation. As a result of the presence of competition in both populations, parametric zones of mono- and bistability are observed. In the parametric zones of bistability the system has either two coexisting equilibria or a coexisting cycle and equilibrium. Here, we investigate the geometrical arrangement of attractors and separatrices, which form the boundaries of the basins of attraction. Such a study is an important component in understanding stochastic phenomena. In this model, the combination of nonlinearity and random perturbations leads to the appearance of new phenomena with no analogues in the deterministic case, such as noise-induced transitions through the separatrix, stochastic excitability, and generation of mixed-mode oscillations. For the parametric study of these phenomena, we use the stochastic sensitivity function technique and the confidence domain method. In the bistability zones, we study the deformations of the equilibrium or oscillation regimes under stochastic perturbation. The geometric criterion for the occurrence of such qualitative changes is the intersection of the confidence domains with the separatrix of the deterministic model. In the zone of monostability, we describe the phenomena of an explosive change in population size as well as the extinction of one or both populations under minor changes in external conditions. With the help of the confidence domain method, we solve the problem of estimating the proximity of a stochastic population to dangerous boundaries, upon reaching which the coexistence of the populations is destroyed and their extinction is observed.

    Views (last year): 28.
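    As a rough illustration of the kind of system discussed in this abstract, the sketch below integrates a stochastic prey–predator model with logistic prey growth and a Holling type II trophic function using the Euler–Maruyama scheme. The equations and all parameter values are illustrative assumptions, not the authors' model.

```python
import numpy as np

def simulate(x0, y0, r=1.0, K=10.0, a=1.0, h=0.5, c=0.5, m=0.3,
             sigma=0.05, dt=1e-3, steps=200_000, seed=0):
    """Euler-Maruyama simulation of a stochastic prey-predator system
    with a Holling type II functional response. Parameter values are
    illustrative only, not taken from the paper."""
    rng = np.random.default_rng(seed)
    x, y = x0, y0
    traj = np.empty((steps, 2))
    for i in range(steps):
        holling = a * x / (1.0 + a * h * x)           # type II trophic function
        dx = r * x * (1.0 - x / K) - holling * y      # prey: growth minus predation
        dy = c * holling * y - m * y                  # predator: conversion minus mortality
        # additive noise of intensity sigma on both populations
        x += dx * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        y += dy * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        x, y = max(x, 0.0), max(y, 0.0)               # populations stay non-negative
        traj[i] = x, y
    return traj

traj = simulate(x0=2.0, y0=1.0)
print(traj[-1])   # final state of one stochastic trajectory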
  2. Neverova G.P., Zhdanova O.L., Kolbina E.A., Abakumov A.I.
    A plankton community: a zooplankton effect in phytoplankton dynamics
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 751-768

    The paper uses methods of mathematical modeling to estimate the influence of zooplankton on the dynamics of phytoplankton abundance. We propose a three-component model of the “phytoplankton–zooplankton” community with discrete time, taking into account the heterogeneity of zooplankton with respect to developmental stage and type of feeding; the model accounts for cannibalism in the zooplankton community, in which mature individuals of some species consume juveniles. Survival rates at the early stages of the zooplankton life cycle depend explicitly on the interaction between zooplankton and phytoplankton. The loss of phytoplankton biomass due to consumption by zooplankton is considered explicitly. We use a Holling type II functional response to describe saturation during biomass consumption. The dynamics of the phytoplankton community is represented by the Ricker model, which implicitly accounts for the limitation of phytoplankton biomass growth by the availability of external resources (mineral nutrition, oxygen, light, etc.).

    The study analyzed scenarios of the transition from stationary dynamics to fluctuations in the abundance of phyto- and zooplankton for various values of the intrapopulation parameters determining the nature of the dynamics of the species constituting the community and of the parameters of their interaction. The focus is on exploring the complex modes of community dynamics. In the framework of the model used, in the absence of interspecific interaction the phytoplankton dynamics undergoes a series of period-doubling bifurcations. At the same time, with the appearance of zooplankton, the cascade of period-doubling bifurcations in phytoplankton and in the community as a whole is realized earlier (at lower reproduction rates of phytoplankton cells) than in the case when phytoplankton develops in isolation. Furthermore, variation of the cannibalism level in zooplankton can significantly change both the current community dynamics and its bifurcation scenario; e.g., with a certain structure of zooplankton food relationships the Neimark–Sacker bifurcation scenario can be realized in the community. Since the cannibalism level in zooplankton can change due to natural maturation processes and the achievement of the carnivorous stage by some individuals, one can expect pronounced changes in the dynamic mode of the community, i.e. abrupt transitions from regular to quasiperiodic dynamics (according to the Neimark–Sacker scenario) and further to cycles with a short period (the realization of period-halving bifurcations).

    Views (last year): 3.
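    The following sketch shows the general structure of a discrete-time “phytoplankton–zooplankton” iteration combining Ricker growth with Holling type II grazing. It is a two-component simplification with illustrative parameters; the paper's model has three components (juvenile and mature zooplankton with cannibalism), which is not reproduced here.

```python
import numpy as np

def step(p, z, r=2.2, b=1.0, h=0.5, s=0.7, c=0.4):
    """One step of a simplified discrete phytoplankton (p) - zooplankton (z)
    model: Ricker growth reduced by Holling type II grazing pressure.
    All parameters are illustrative; depending on their values the community
    settles to an equilibrium, cycles or quasiperiodic regimes."""
    grazing = b * z / (1.0 + b * h * p)                 # Holling type II grazing pressure
    p_next = p * np.exp(r * (1.0 - p) - grazing)        # Ricker growth minus grazing
    z_next = s * z + c * p * (1.0 - np.exp(-grazing))   # survivors + converted biomass
    return p_next, z_next

p, z = 0.5, 0.1
for t in range(500):
    p, z = step(p, z)
print(p, z)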
  3. Zabotin, V.I., Chernyshevskij P.A.
    Extension of Strongin’s Global Optimization Algorithm to a Function Continuous on a Compact Interval
    Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1111-1119

    The Lipschitz continuity property has long been used to solve the global optimization problem and continues to be used. Here we can mention the work of Piyavskii, Yevtushenko, Strongin, Shubert, Sergeyev, Kvasov and others. Most papers assume a priori knowledge of the Lipschitz constant, but obtaining this constant is a separate problem. Moreover, one must prove that the objective function is really Lipschitz, which is a complicated problem too. For the case where Lipschitz continuity is established, Strongin proposed an algorithm for the global optimization of a function satisfying the Lipschitz condition on a compact interval without any a priori knowledge of the Lipschitz constant. The algorithm not only finds a global extremum but also determines an estimate of the Lipschitz constant. It is known that every function satisfying the Lipschitz condition on a compact convex set is uniformly continuous, but the converse is not always true. However, there exist models (Arutyunova, Dulliev, Zabotin) whose study requires minimizing a function that is continuous but definitely not Lipschitz. One of the algorithms for solving such a problem was proposed by R. J. Vanderbei. In his work he introduced a generalization of the Lipschitz property named $\varepsilon$-Lipschitz and proved that a function defined on a compact convex set is uniformly continuous if and only if it satisfies the $\varepsilon$-Lipschitz condition. This property allowed him to extend Piyavskii’s method. However, Vanderbei assumed that for a given value of $\varepsilon$ it is possible to obtain an associated Lipschitz $\varepsilon$-constant, which is a very difficult problem. Thus, there is a need to construct, for a function continuous on a compact convex domain, a global optimization algorithm which works in some way like Strongin’s algorithm, i.e., without any a priori knowledge of the Lipschitz $\varepsilon$-constant. In this paper we propose an extension of Strongin’s global optimization algorithm to a function continuous on a compact interval using the $\varepsilon$-Lipschitz concept, prove its convergence, and solve some numerical examples using the software that implements the developed method.
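    A minimal sketch of a Strongin-type global search on an interval, with the Lipschitz constant estimated adaptively from the trial points (no a priori constant required). It illustrates the baseline algorithm mentioned in the abstract; the paper's actual contribution, the $\varepsilon$-Lipschitz extension to merely continuous functions, is not reproduced here, and the test function is an assumption.

```python
import numpy as np

def strongin_minimize(f, a, b, r=2.0, n_trials=100):
    """Strongin-style 1D global search: adaptively estimate the Lipschitz
    constant, compute interval characteristics, subdivide the best interval."""
    xs = [a, b]
    zs = [f(a), f(b)]
    for _ in range(n_trials):
        # adaptive Lipschitz estimate from the accumulated trial data
        M = max(abs(zs[i] - zs[i - 1]) / (xs[i] - xs[i - 1])
                for i in range(1, len(xs)))
        m = r * M if M > 0 else 1.0
        def R(i):
            # characteristic of interval [xs[i-1], xs[i]]; the largest is subdivided
            dx = xs[i] - xs[i - 1]
            dz = zs[i] - zs[i - 1]
            return m * dx + dz * dz / (m * dx) - 2.0 * (zs[i] + zs[i - 1])
        t = max(range(1, len(xs)), key=R)
        # new trial point inside the chosen interval
        x_new = 0.5 * (xs[t] + xs[t - 1]) - (zs[t] - zs[t - 1]) / (2.0 * m)
        z_new = f(x_new)
        pos = int(np.searchsorted(xs, x_new))
        xs.insert(pos, x_new)
        zs.insert(pos, z_new)
    k = int(np.argmin(zs))
    return xs[k], zs[k]

x_star, f_star = strongin_minimize(lambda x: np.sin(x) + 0.3 * x, 0.0, 10.0)
print(x_star, f_star)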

  4. Reshitko M.A., Ougolnitsky G.A., Usov A.B.
    Numerical method for finding Nash and Stackelberg equilibria in river water quality control models
    Computer Research and Modeling, 2020, v. 12, no. 3, pp. 653-667

    In this paper we consider a mathematical model of water quality control. We study a system with a two-level hierarchy: one environmental organization (the supervisor) at the top level and a few industrial enterprises (the agents) at the lower level. The main goal of the supervisor is to keep the water pollution level below a certain value, while the enterprises pollute the water as a side effect of the manufacturing process. The supervisor achieves its goal by charging penalties to the enterprises. On the other hand, the enterprises choose how much to purify their wastewater in order to maximize their income. The fees increase the budget of the supervisor. Moreover, effluent fees are charged for the quantity and/or quality of the discharged pollution. Unfortunately, in practice such charges are ineffective due to the insufficient tax size. The article solves the problem of determining the optimal size of the charge for pollution discharge, which allows maintaining the quality of the river water within the required range.

    We describe the goals of the system members by objective functionals and describe the water pollution level and the state of the enterprises by a system of ordinary differential equations. We consider the problem from the points of view of both the supervisor and the enterprises. From the agents’ point of view a normal-form game arises, in which we search for a Nash equilibrium; for the supervisor we search for a Stackelberg equilibrium. We propose numerical algorithms for finding both the Nash and the Stackelberg equilibrium. To construct the Nash equilibrium, we solve an optimal control problem using Pontryagin’s maximum principle: we construct the Hamiltonian and solve the corresponding system of differential equations with the shooting method and the finite difference method. Numerical calculations show that a low penalty for the enterprises results in an increasing pollution level, while a relatively high penalty can result in the bankruptcy of the enterprises. This leads to the problem of choosing the optimal penalty, which requires considering the problem from the supervisor’s point of view. In that case we use the method of qualitatively representative scenarios for the supervisor and Pontryagin’s maximum principle for the agents to find an optimal control for the system. Finally, we compute the system consistency ratio and test the algorithms on different data. The results show that hierarchical control is required to ensure system stability.
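    To illustrate the combination of Pontryagin’s maximum principle with the shooting method mentioned above, the sketch below solves a toy linear-quadratic control problem (minimize the integral of $x^2 + u^2$ with $\dot{x} = u$), not the river-quality model of the paper. The state/adjoint system and boundary conditions follow from the maximum principle; bisection adjusts the unknown initial adjoint value.

```python
def shoot(p0, x0=1.0, T=1.0, n=1000):
    """Integrate the state/adjoint system of a toy optimal control problem:
    minimize integral of x^2 + u^2 with x' = u. The maximum principle gives
    u = -p/2, so x' = -p/2, p' = -2x, with x(0) = x0 and p(T) = 0 required.
    Returns the terminal adjoint value p(T) for a guessed p(0) = p0."""
    dt = T / n
    x, p = x0, p0
    for _ in range(n):                 # explicit Euler, illustrative only
        dx = -0.5 * p
        dp = -2.0 * x
        x += dx * dt
        p += dp * dt
    return p

def shooting_method(lo=-10.0, hi=10.0, tol=1e-8):
    """Bisection on p(0) so that the terminal condition p(T) = 0 holds.
    Assumes shoot(lo) and shoot(hi) have opposite signs, true for this toy."""
    f_lo = shoot(lo)
    mid = 0.5 * (lo + hi)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = shoot(mid)
        if abs(f_mid) < tol:
            break
        if f_lo * f_mid < 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return mid

# analytic answer for this toy problem is 2*tanh(1) ~ 1.5232
print(shooting_method())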

  5. Karpaev A.A., Aliev R.R.
    Application of simplified implicit Euler method for electrophysiological models
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 845-864

    A simplified implicit Euler method was analyzed as an alternative to the explicit Euler method, which is commonly used in numerical modeling in electrophysiology. The majority of electrophysiological models are quite stiff, since the dynamics they describe includes a wide spectrum of time scales: a fast depolarization, which lasts milliseconds, precedes a considerably slower repolarization, both being parts of the action potential observed in excitable cells. In this work we estimate stiffness by a formula that does not require calculating the eigenvalues of the Jacobian matrix of the studied ODEs. The efficiency of the numerical methods was compared in the case of typical representatives of detailed and conceptual models of excitable cells: the Hodgkin–Huxley model of a neuron and the Aliev–Panfilov model of a cardiomyocyte. The comparison of the efficiency of the numerical methods was carried out via norms that are widely used in biomedical applications. The impact of the stiffness ratio on the speedup of the simplified implicit method was studied: a real gain in speed was obtained for the Hodgkin–Huxley model. The benefits of using simple and high-order methods for electrophysiological models are discussed, along with their stability issues; the reasons for using simplified rather than high-order methods in practical simulations are discussed in the corresponding section. We calculated higher-order derivatives of the solutions of the Hodgkin–Huxley model with various stiffness ratios; their maximum absolute values turned out to be quite large. These values enter the formula for the approximation constant of a numerical method and hence ruin the effect of the other term (a small factor which depends on the order of approximation). This leads to a large global error. We carried out a qualitative stability analysis of the explicit Euler method and estimated the influence of the model’s parameters on the boundary of the region of absolute stability. The latter is used to set the value of the timestep for simulations a priori.
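    The sketch below contrasts the explicit Euler method with a linearized (one Newton step) implicit Euler update on a stiff scalar test equation. The test equation, the stiffness parameter and the step size are illustrative assumptions; the paper works with the Hodgkin–Huxley and Aliev–Panfilov models, and its "simplified implicit" scheme may differ in detail.

```python
import numpy as np

lam = 1000.0                         # illustrative stiffness parameter

def f(t, y):
    # stiff test equation with exact solution y(t) = cos(t)
    return -lam * (y - np.cos(t)) - np.sin(t)

def dfdy(t, y):
    return -lam                      # Jacobian (a scalar here)

def explicit_euler(y0, t0, t1, dt):
    t, y = t0, y0
    while t < t1 - 1e-12:
        y += dt * f(t, y)
        t += dt
    return y

def linearized_implicit_euler(y0, t0, t1, dt):
    """One Newton step per time step: y_{n+1} = y_n + dt*f/(1 - dt*J)."""
    t, y = t0, y0
    while t < t1 - 1e-12:
        y += dt * f(t + dt, y) / (1.0 - dt * dfdy(t, y))
        t += dt
    return y

dt = 0.01                            # above the explicit stability limit 2/lam
print("explicit :", explicit_euler(1.0, 0.0, 1.0, dt))           # blows up
print("implicit :", linearized_implicit_euler(1.0, 0.0, 1.0, dt))
print("exact    :", np.cos(1.0))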

  6. Volokhova A.V., Zemlyanay E.V., Kachalov V.V., Rikhvitskiy V.S.
    Simulation of the gas condensate reservoir depletion
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1081-1095

    One of the problems in developing gas condensate fields lies in the fact that the condensed hydrocarbons in the gas-bearing layer can get stuck in the pores of the formation and hence cannot be extracted. In this regard, research is underway to increase the recoverability of hydrocarbons in such fields. This research includes a wide range of studies on the mathematical simulation of the passage of gas condensate mixtures through a porous medium under various conditions.

    In the present work, within the classical approach based on Darcy’s law and the law of continuity of flows, we formulate an initial-boundary value problem for a system of nonlinear differential equations that describes the depletion of a multicomponent gas condensate mixture in a porous reservoir. A computational scheme is developed on the basis of a finite-difference approximation and the fourth-order Runge–Kutta method. The scheme can be used for simulations both in the spatially one-dimensional case, corresponding to the conditions of the laboratory experiment, and in the two-dimensional case, when it comes to modeling a flat gas-bearing formation with circular symmetry.
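    For orientation, the sketch below combines a finite-difference spatial discretization with classical fourth-order Runge–Kutta time stepping on a crude single-phase pressure-diffusion stand-in for Darcy flow through a core sample. The equation, boundary conditions and all values are illustrative assumptions, not the paper's multicomponent model.

```python
import numpy as np

def rhs(p, dx, kappa=1.0):
    """Finite-difference right-hand side of dp/dt = kappa * d2p/dx2
    (a single-phase stand-in for Darcy flow); end pressures stay fixed."""
    d2p = np.zeros_like(p)
    d2p[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx**2
    return kappa * d2p

def rk4_step(p, dt, dx):
    """Classical fourth-order Runge-Kutta step."""
    k1 = rhs(p, dx)
    k2 = rhs(p + 0.5 * dt * k1, dx)
    k3 = rhs(p + 0.5 * dt * k2, dx)
    k4 = rhs(p + dt * k3, dx)
    return p + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

n, dx, dt = 101, 0.01, 2e-5
p = np.full(n, 10.0)       # initial reservoir pressure (arbitrary units)
p[-1] = 1.0                # outlet held at low pressure: depletion through one end
for _ in range(5000):
    p = rk4_step(p, dt, dx)
print(p[::20])             # pressure profile along the core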

    The computer implementation is based on the combination of C++ and Maple tools, using the MPI parallel programming technique to speed up the calculations. The calculations were performed on the HybriLIT cluster of the Multifunctional Information and Computing Complex of the Laboratory of Information Technologies of the Joint Institute for Nuclear Research.

    The numerical results are compared with the experimental data on the pressure dependence of the output of a nine-component hydrocarbon mixture obtained at a laboratory facility (VNIIGAZ, Ukhta). The calculations were performed for two types of porous filler in the laboratory model of the formation: a terrigenous filler at 25 °C and a carbonate one at 60 °C. It is shown that the developed approach ensures agreement between the numerical results and the experimental data. By fitting the numerical results to the experimental data on the depletion of the laboratory reservoir, we obtained the values of the parameters that determine the interphase transition coefficient for the simulated system. Using the same parameters, a computer simulation of the depletion of a thin gas-bearing layer in the circular symmetry approximation was carried out.

  7. Safiullina L.F., Gubaydullin I.M.
    Analysis of the identifiability of the mathematical model of propane pyrolysis
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1045-1057

    The article presents the numerical modeling and study of the kinetic model of propane pyrolysis. The study of the reaction kinetics is a necessary stage in modeling the dynamics of the gas flow in the reactor.

    The kinetic model of propane pyrolysis is a nonlinear system of first-order ordinary differential equations whose parameters are the reaction rate constants. Mathematical modeling of the process is based on the mass conservation law. To solve the initial (forward) problem, implicit methods for stiff systems of ordinary differential equations are used. The model contains 60 input kinetic parameters and 17 output parameters corresponding to the reaction substances, of which only 9 are observable. When solving the parameter estimation (inverse) problem, the question arises of the non-uniqueness of the set of parameters that satisfy the experimental data. Therefore, before solving the inverse problem, the possibility of determining the parameters of the model is analyzed (identifiability analysis).

    To analyze identifiability, we use the orthogonal method, which has proven itself well for analyzing models with a large number of parameters. The algorithm is based on the analysis of the sensitivity matrix by methods of differential and linear algebra, which shows the degree of dependence of the unknown model parameters on the given measurements. The analysis of sensitivity and identifiability showed that the parameters of the model are stably determined from the given set of experimental data. The article presents a list of the model parameters ordered from the most to the least identifiable. Taking into account the identifiability analysis of the mathematical model, restrictions were introduced on the search for the less identifiable parameters when solving the inverse problem.
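    A minimal sketch of an orthogonal-projection ranking of parameters from a sensitivity matrix, in the spirit of the method named in the abstract. The sensitivity matrix here is random toy data; in the paper it would come from differentiating the kinetic model outputs with respect to the rate constants, and the exact variant of the orthogonal method may differ.

```python
import numpy as np

def orthogonal_ranking(S):
    """Rank parameters by identifiability: S has one row per measurement and
    one column per parameter. At each step pick the column with the largest
    norm of its component orthogonal to the already selected columns; a small
    residual norm means the parameter is hard to identify."""
    S = np.asarray(S, dtype=float)
    n_par = S.shape[1]
    remaining = list(range(n_par))
    selected, residual_norms = [], []
    R = S.copy()                                   # residual (deflated) columns
    for _ in range(n_par):
        norms = np.linalg.norm(R[:, remaining], axis=0)
        k = remaining[int(np.argmax(norms))]
        selected.append(k)
        residual_norms.append(float(norms.max()))
        # orthogonalize the remaining columns against the chosen one
        q = R[:, k] / (np.linalg.norm(R[:, k]) + 1e-15)
        remaining.remove(k)
        for j in remaining:
            R[:, j] -= q * (q @ R[:, j])
    return selected, residual_norms

# toy sensitivity matrix: 9 observed outputs, 5 parameters (illustrative sizes)
rng = np.random.default_rng(1)
S = rng.standard_normal((9, 5))
order, norms = orthogonal_ranking(S)
print("parameters from most to least identifiable:", order)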

    The inverse problem of parameter estimation was solved using a genetic algorithm. The article presents the optimal values of the kinetic parameters found. A comparison of the experimental and calculated temperature dependences of the concentrations of propane and of the main and by-products of the reaction for different flow rates of the mixture is presented. The conclusion about the adequacy of the constructed mathematical model is drawn from the agreement of the obtained results with physicochemical laws and experimental data.

  8. Suzdaltsev V.A., Suzdaltsev I.V., Tarhavova E.G.
    Fuzzy knowledge extraction in the development of expert predictive diagnostic systems
    Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1395-1408

    Expert systems imitate the professional experience and thinking process of a specialist to solve problems in various subject areas. An example of a problem that is expedient to solve with the help of an expert system is the diagnosis problem, which arises in technology, medicine, and other fields. When solving the diagnostic problem, it is necessary to anticipate the occurrence of critical or emergency situations in the future: these are situations which require the timely intervention of specialists to prevent critical consequences. Fuzzy set theory provides one of the approaches to solving ill-structured problems, to which diagnosis-making problems belong. The theory of fuzzy sets provides means for the formation of linguistic variables, which are helpful for describing the modeled process. Linguistic variables are elements of fuzzy logical rules that simulate the reasoning of professionals in the subject area. To develop fuzzy rules it is necessary to resort to a survey of experts. Knowledge engineers use the experts’ opinions to evaluate the correspondence between a typical current situation and the risk of an emergency in the future. The result of knowledge extraction is a description of linguistic variables that includes a combination of signs. Experts are involved in the survey to create descriptions of linguistic variables and to present a set of simulated situations. When building such systems, the main problem of the survey is the laboriousness of the interaction of knowledge engineers with experts; the main reason is the multiplicity of questions the expert must answer. The paper presents the rationale for a method which allows the knowledge engineer to reduce the number of questions posed to the expert. The paper describes the experiments carried out to test the applicability of the proposed method. An expert system for predicting risk groups for neonatal pathologies and pregnancy pathologies, built using the proposed knowledge extraction method, confirms the feasibility of the proposed approach.
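    For readers unfamiliar with linguistic variables, the sketch below shows a generic triangular membership function and a single fuzzy rule evaluated with min() as the fuzzy AND. The variable names, term boundaries and the rule itself are illustrative assumptions, not the paper's knowledge base.

```python
def triangular(a, b, c):
    """Triangular membership function for one term of a linguistic variable."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# illustrative linguistic variable "sign level" with three terms on [0, 1]
low    = triangular(0.00, 0.25, 0.50)
medium = triangular(0.25, 0.50, 0.75)
high   = triangular(0.50, 0.75, 1.00)

def rule_high_risk(sign1, sign2):
    """One fuzzy rule: IF sign1 is high AND sign2 is medium THEN risk is high.
    min() plays the role of the fuzzy AND; the result is the degree to which
    the rule fires for a concrete situation."""
    return min(high(sign1), medium(sign2))

print(rule_high_risk(0.9, 0.55))   # firing degree for one illustrative situation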

  9. Makarov I.S., Bagantsova E.R., Iashin P.A., Kovaleva M.D., Zakharova E.M.
    Development of and research into a rigid algorithm for analyzing Twitter publications and its influence on the movements of the cryptocurrency market
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 157-170

    Social media is a crucial indicator of the position of assets in the financial market. The paper describes a rigid solution of the classification problem of determining the influence of social media activity on financial market movements. Reputable crypto trader influencers are selected, and packages of their Twitter posts are used as data. The preprocessing of the texts, which are characterized by the frequent use of slang words and abbreviations, consists in lemmatization with Stanza and the use of regular expressions. A word is considered as an element of the vector representing a data unit in the course of solving the binary classification problem. The best markup parameters for processing Binance candles are searched for. The methods of feature selection, which is necessary for a precise description of the text data and the subsequent process of establishing the dependence, are represented by machine learning and statistical analysis. The first approach to feature selection is based on the information criterion; it is implemented in a random forest model and is relevant for selecting features for splitting nodes in a decision tree. The second one is based on the rigid compilation of a binary vector during a rough check of the presence or absence of a word in the package and counting the sum of the elements of this vector; a decision is then made depending on whether this sum exceeds a threshold value determined in advance by analyzing the frequency distribution of mentions of the word. The algorithm used to solve the problem was named the benchmark and analyzed as a tool. Similar algorithms are often used in automated trading strategies. In the course of the study, observations of the influence of frequently occurring words, which are used as a basis of dimension 2 and 3 in vectorization, are described as well.
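    The benchmark rule described above (binary presence vector, sum, comparison with a threshold) can be sketched in a few lines. The keyword list and threshold below are illustrative assumptions; in the paper the threshold would be derived from the frequency distribution of word mentions.

```python
import re

def benchmark_signal(post_package, keywords, threshold):
    """Build a binary vector marking the presence of each keyword in a package
    of posts, sum its elements and compare the sum with a pre-set threshold.
    Returns 1 if the rule predicts a market move, 0 otherwise."""
    text = " ".join(post_package).lower()
    tokens = set(re.findall(r"[a-z']+", text))
    presence = [1 if word in tokens else 0 for word in keywords]
    return int(sum(presence) > threshold)

# illustrative package of posts and keyword list
posts = ["BTC to the moon!", "bullish breakout on binance", "hodl and buy the dip"]
keywords = ["moon", "bullish", "buy", "crash", "dump"]
print(benchmark_signal(posts, keywords, threshold=2))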

  10. Bernadotte A., Mazurin A.D.
    Optimization of the brain command dictionary based on the statistical proximity criterion in silent speech recognition task
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 675-690

    In our research, we focus on the classification problem of silent speech recognition in order to develop a brain–computer interface (BCI) based on electroencephalographic (EEG) data, which will be capable of assisting people with mental and physical disabilities and expanding human capabilities in everyday life. Our previous research has shown that the silent pronouncing of some words results in almost identical distributions of the electroencephalographic signal data. Such a phenomenon has a suppressive impact on the performance of neural network models. This paper proposes a data processing technique that distinguishes between statistically remote and inseparable classes in the dataset. Applying the proposed approach helps us reach the goal of maximizing the semantic load of the dictionary used in the BCI.

    Furthermore, we propose the existence of a statistical predictive criterion for the accuracy of binary classification of the words in a dictionary. Such a criterion aims to estimate the lower and upper bounds of classifier performance only by measuring quantitative statistical properties of the data (in particular, using the Kolmogorov–Smirnov method). We show that higher levels of classification accuracy can be achieved by applying the proposed predictive criterion, making it possible to form a dictionary optimized in terms of semantic load for EEG-based BCIs. Furthermore, using such a dictionary as a training dataset for classification problems ensures the statistical remoteness of the classes by taking into account the semantic and phonetic properties of the corresponding words and improves the classification performance of silent speech recognition models.
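    As a rough illustration of screening word classes by statistical remoteness, the sketch below applies the two-sample Kolmogorov–Smirnov test pairwise to one-dimensional feature samples and keeps only distinguishable pairs. Real EEG features are multichannel and the paper's criterion is more elaborate; the data, class names and significance level here are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def pairwise_separability(class_samples, alpha=0.05):
    """For every pair of word classes, run the two-sample Kolmogorov-Smirnov
    test on their (scalar) feature samples and keep pairs whose distributions
    are statistically distinguishable at level alpha."""
    labels = list(class_samples)
    separable = []
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            stat, p_value = ks_2samp(class_samples[a], class_samples[b])
            if p_value < alpha:            # distributions differ significantly
                separable.append((a, b, stat))
    return separable

rng = np.random.default_rng(0)
samples = {
    "word_A": rng.normal(0.0, 1.0, 300),
    "word_B": rng.normal(0.1, 1.0, 300),   # nearly identical to word_A
    "word_C": rng.normal(1.5, 1.0, 300),   # statistically remote class
}
print(pairwise_separability(samples))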
