Search results for 'm method':
Articles found: 617
  1. Aksenov A.A., Zhluktov S.V., Kalugina M.D., Kashirin V.S., Lobanov A.I., Shaurman D.V.
    Reduced mathematical model of blood coagulation taking into account thrombin activity switching as a basis for estimation of hemodynamic effects and its implementation in FlowVision package
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 1039-1067

    The possibility of numerical 3D simulation of thrombi formation is considered.

    The detailed mathematical models of thrombus and clot formation developed to date include a great number of equations. When implemented in a CFD code, such detailed models require substantial computer resources to simulate thrombus growth in a blood flow. A reasonable alternative is to use reduced mathematical models. Two models based on a reduced mathematical model of thrombin generation are described in this paper.

    The first model describes the growth of a thrombus in a large vessel (artery). Arterial flows are essentially unsteady and are characterized by pulse waves. The blood velocity there is high compared to that in the venous tree. The reduced model for thrombin generation and thrombus growth in an artery is relatively simple: the processes accompanying thrombin generation in arteries are well described by the zero-order approximation.

    Venous flow is characterized by lower velocities, lower gradients, and lower shear stresses. In order to simulate thrombin generation in veins, a more complex system of equations has to be solved. The model must allow for all the non-linear terms in the right-hand sides of the equations.

    The simulation is carried out in the industrial software FlowVision.

    The performed numerical investigations have shown the suitability of the reduced models for simulating thrombin generation and thrombus growth. The calculations demonstrate the formation of a recirculation zone behind a thrombus, where the thrombin concentration and the mass fraction of activated platelets are maximal. Formation of such a zone causes slow growth of the thrombus downstream. At the upstream side of the thrombus, the concentration of activated platelets is low, and upstream thrombus growth is negligible.

    When the variation of the blood flow during a heart cycle is taken into account, the thrombus grows substantially slower than under the assumption of constant (averaged over a heart cycle) conditions. Thrombin and activated platelets produced during diastole are quickly carried away by the blood flow during systole. Accounting for the non-Newtonian rheology of blood noticeably affects the results.
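
    The non-Newtonian rheology of blood mentioned above is commonly represented by a shear-thinning law such as the Carreau model. The sketch below uses typical literature parameter values, an assumption on our part since the abstract does not state the paper's specific rheological model, and shows how the effective viscosity decreases with shear rate.

      import numpy as np

      def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
          """Carreau model for shear-thinning blood viscosity, Pa*s.
          mu0, mu_inf - zero-shear and infinite-shear viscosities, Pa*s
          lam         - relaxation time, s
          n           - power-law index
          Parameter values are typical literature values (assumed, not from the paper).
          """
          return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

      # effective viscosity drops by roughly an order of magnitude from low to high shear
      for gamma in (0.1, 1.0, 10.0, 100.0, 1000.0):
          print(f"shear rate {gamma:7.1f} 1/s -> viscosity {carreau_viscosity(gamma):.5f} Pa*s")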

  2. Reshitko M.A., Usov A.B., Ougolnitsky G.A.
    Water consumption control model for regions with low water availability
    Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1395-1410

    This paper considers the problem of water consumption in the regions of Russia with low water availability. We provide a review of the existing methods of controlling the quality and quantity of water resources at different scales, from households to worldwide. The paper itself considers regions with a low “water availability” parameter, defined as the amount of water available per person per year. Special attention is paid to regions where this parameter is low because of natural features rather than a large population. In such regions, many resources are spent on water processing infrastructure to store water and to transport water from other regions, and the main water consumers are industry and agriculture.

    We propose a dynamic two-level hierarchical model that relates the water consumption of a region to its gross regional product. At the top level there is the regional administration (supervisor), and at the lower level there are the region’s enterprises (agents). The supervisor sets fees for water consumption. We study the model with Pontryagin’s maximum principle and obtain the agents’ optimal controls in analytical form. For the supervisor’s control we provide a numerical algorithm. The model has six free coefficients, which can be chosen so that the model represents a particular region. We use data from the Russian Federal State Statistics Service for model identification. For the numerical analysis we use the trust region reflective algorithm. We provide calculations for several regions with low water availability. It is shown that it is possible to reduce the water consumption of a region by more than 20% while the drop in gross regional product is less than 10%.
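
    As an illustration of the identification step, the sketch below fits the free coefficients of a placeholder model to synthetic regional series with SciPy's trust region reflective solver (method="trf" in scipy.optimize.least_squares). The model function and the data are assumptions for the illustration, not the authors' six-coefficient model.

      import numpy as np
      from scipy.optimize import least_squares

      # Hypothetical observed series: years, water consumption, gross regional product
      years = np.arange(2010, 2020)
      consumption_obs = np.array([5.1, 5.0, 4.9, 4.9, 4.8, 4.7, 4.7, 4.6, 4.6, 4.5])   # placeholder data
      grp_obs = np.array([1.00, 1.03, 1.05, 1.08, 1.10, 1.13, 1.15, 1.18, 1.20, 1.23])  # placeholder data

      def model(params, t):
          # Placeholder two-output model linking consumption and GRP; NOT the paper's model
          a, b, c, d = params
          tau = t - t[0]
          consumption = a * np.exp(-b * tau)
          grp = c + d * tau
          return consumption, grp

      def residuals(params):
          consumption, grp = model(params, years)
          return np.concatenate([consumption - consumption_obs, grp - grp_obs])

      # 'trf' is the trust region reflective algorithm; bounds keep coefficients non-negative
      fit = least_squares(residuals, x0=[5.0, 0.01, 1.0, 0.02], bounds=(0, np.inf), method="trf")
      print("identified coefficients:", fit.x)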

  3. Rusyak I.G., Nefedov D.G.
    Solution of optimization problem of wood fuel facility location by the thermal energy cost criterion
    Computer Research and Modeling, 2012, v. 4, no. 3, pp. 651-659

    The paper presents a mathematical model for the optimal location of enterprises producing fuel from renewable wood waste for a regional distributed heat supply system. The optimization is based on minimizing the total cost of the end product, the thermal energy from wood fuel. The method for solving the problem is based on a genetic algorithm. The paper also shows practical results of the model using the Udmurt Republic as an example.
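
    The genetic algorithm itself is not detailed in the abstract. A minimal sketch of the general scheme, with a binary chromosome encoding which candidate sites are opened and a fitness equal to fixed plus transport costs, is given below; all data, costs, and operator settings are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      # Illustrative data: candidate facility sites and heat consumers on a plane
      n_sites, n_consumers = 12, 40
      sites = rng.uniform(0, 100, size=(n_sites, 2))
      consumers = rng.uniform(0, 100, size=(n_consumers, 2))
      fixed_cost = rng.uniform(50, 150, size=n_sites)                           # cost of opening a site
      dist = np.linalg.norm(consumers[:, None, :] - sites[None, :, :], axis=2)  # transport distances

      def total_cost(chromosome):
          """Fixed costs of opened sites plus cost of serving each consumer from its nearest open site."""
          if not chromosome.any():
              return np.inf
          open_idx = np.flatnonzero(chromosome)
          return fixed_cost[open_idx].sum() + dist[:, open_idx].min(axis=1).sum()

      def genetic_search(pop_size=60, generations=200, p_mut=0.05):
          pop = rng.integers(0, 2, size=(pop_size, n_sites))
          for _ in range(generations):
              costs = np.array([total_cost(ind) for ind in pop])
              elite = pop[np.argsort(costs)[: pop_size // 2]]    # selection: keep the cheaper half
              children = []
              while len(children) < pop_size - len(elite):
                  a, b = elite[rng.integers(len(elite), size=2)]
                  cut = rng.integers(1, n_sites)                 # one-point crossover
                  child = np.concatenate([a[:cut], b[cut:]])
                  flip = rng.random(n_sites) < p_mut             # bit-flip mutation
                  children.append(np.where(flip, 1 - child, child))
              pop = np.vstack([elite, children])
          costs = np.array([total_cost(ind) for ind in pop])
          return pop[int(np.argmin(costs))], costs.min()

      best_layout, best_cost = genetic_search()
      print("opened sites:", np.flatnonzero(best_layout), "total cost:", round(best_cost, 1))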

    Views (last year): 5. Citations: 2 (RSCI).
  4. Arkhangelskaya T.A., Khokhlova O.S., Miakshina T.N.
    Mathematical modeling of soil hydrology in two arable Chernozems with different depth to carbonates
    Computer Research and Modeling, 2016, v. 8, no. 2, pp. 401-410

    Simulation of soil hydrology was performed for two plots: the first one was under corn monocrop, and the other one was under bare fallow for 50 years. The depth to carbonates is 140–160 cm under corn and 70–80 cm under bare fallow. Mathematical modeling with the HYDRUS-1D software and the FAO56 method demonstrated that soil hydrology differed between the two plots. Soil moisture was generally higher under bare fallow than under corn. The upward fluxes were significantly greater under bare fallow than under corn, and they were obtained for a thicker soil layer.
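
    The FAO56 method mentioned above refers to the FAO-56 Penman-Monteith equation for reference evapotranspiration, which typically supplies the atmospheric forcing for HYDRUS-1D runs. A minimal sketch of the standard ET0 formula follows; the sample weather values are assumptions, not data from the paper.

      import math

      def fao56_et0(t_mean, rn, g, u2, rh_mean, pressure=101.3):
          """FAO-56 Penman-Monteith reference evapotranspiration, mm/day.
          t_mean   - mean daily air temperature at 2 m, deg C
          rn, g    - net radiation and soil heat flux, MJ m-2 day-1
          u2       - wind speed at 2 m, m/s
          rh_mean  - mean relative humidity, %
          pressure - atmospheric pressure, kPa
          """
          es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))   # saturation vapour pressure, kPa
          ea = es * rh_mean / 100.0                                   # actual vapour pressure, kPa
          delta = 4098.0 * es / (t_mean + 237.3) ** 2                 # slope of the vapour pressure curve
          gamma = 0.000665 * pressure                                 # psychrometric constant, kPa/deg C
          num = 0.408 * delta * (rn - g) + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea)
          return num / (delta + gamma * (1.0 + 0.34 * u2))

      # assumed summer day for a Chernozem site (illustrative values only)
      print(round(fao56_et0(t_mean=22.0, rn=15.0, g=0.5, u2=2.0, rh_mean=60.0), 2), "mm/day")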

    Views (last year): 2. Citations: 1 (RSCI).
  5. Govorkov D.A., Novikov V.P., Solovyev I.G., Tsibulsky V.R.
    Interval analysis of vegetation cover dynamics
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1191-1205

    Developing a previously obtained result on modeling the dynamics of vegetation cover driven by variations in the temperature background, a new scheme for the interval analysis of the dynamics of floristic images of formations is presented for the case when the response-rate parameter of the dynamics model of each plant species under consideration is specified by an interval of scatter of its possible values. A detailed description of the functional parameters of biodiversity macromodels, desirable in fundamental research and accounting for the essential causes of the observed evolutionary processes, may turn out to be problematic. Using more reliable interval estimates of the variability of the functional parameters “bypasses” the problem of uncertainty in the primary assessment of the evolution of the phyto-resource potential of the developed controlled territories. The solutions obtained preserve not only a qualitative picture of the dynamics of species diversity, but also give a rigorous, within the framework of the initial assumptions, quantitative assessment of the degree of presence of each plant species.

    The practical significance of two-sided estimation schemes based on constructing equations for the upper and lower boundaries of the scatter of solution trajectories depends on the conditions and on the measure of proportional correspondence between the scatter intervals of the initial parameters and the scatter intervals of the solutions. For dynamic systems, the desired proportionality is not always ensured. The given examples demonstrate acceptable accuracy of interval estimation of evolutionary processes. It is important to note that the construction of the estimating equations generates vanishing intervals of scatter of solutions for quasi-constant temperature perturbations of the system. In other words, the trajectories of stationary temperature states of the vegetation cover are not coarsened by the proposed interval estimation scheme. The rigor of interval estimation of the species composition of the vegetation cover of formations can become a determining factor when choosing a method for analyzing the dynamics of species diversity and the plant potential of territorial systems of resource-ecological monitoring.

    The possibilities of the proposed approach are illustrated by geoinformation images of the computational analysis of the dynamics of the vegetation cover of the Yamal Peninsula and by graphs of the retrospective analysis of the floristic variability of the formations of the landscape-lithological group “Upper”, based on the summer temperature background data of the Salehard weather station from 2010 back to 1935. The developed indicators of floristic variability and the given graphs characterize the dynamics of species diversity both on average and individually, in the form of intervals of possible states for each plant species.
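
    As a simplified illustration of the two-sided estimation idea, bounding trajectories when a response-rate parameter is only known up to an interval of scatter, the sketch below bounds a scalar relaxation model $dx/dt = k(u(t) - x)$ with $k \in [k_{lo}, k_{hi}]$. This toy model and its bounding rule are assumptions for illustration only, not the authors' equations.

      import numpy as np

      def interval_bounds(u, x0, k_lo, k_hi, dt=0.1):
          """Pointwise lower/upper envelopes for dx/dt = k*(u(t) - x), k in [k_lo, k_hi].
          At every step the bound equations pick the k that pushes the bound outward,
          which for this scalar model yields valid two-sided estimates (comparison principle).
          """
          lo = hi = x0
          lows, highs = [lo], [hi]
          for ut in u[:-1]:
              k_up = k_hi if ut >= hi else k_lo     # maximize dx/dt for the upper bound
              k_dn = k_lo if ut >= lo else k_hi     # minimize dx/dt for the lower bound
              hi = hi + dt * k_up * (ut - hi)
              lo = lo + dt * k_dn * (ut - lo)
              lows.append(lo)
              highs.append(hi)
          return np.array(lows), np.array(highs)

      t = np.arange(0, 20, 0.1)

      # quasi-constant forcing: the scatter interval stays narrow, mirroring the remark above
      lo, hi = interval_bounds(np.full_like(t, 1.0), x0=0.0, k_lo=0.2, k_hi=0.4)
      print("final interval width, constant forcing:", round(hi[-1] - lo[-1], 4))

      # oscillating forcing: the envelope widens
      lo, hi = interval_bounds(1.0 + 0.5 * np.sin(t), x0=0.0, k_lo=0.2, k_hi=0.4)
      print("final interval width, oscillating forcing:", round(hi[-1] - lo[-1], 4))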

  6. Irkhin I.A., Bulatov V.G., Vorontsov K.V.
    Additive regularization of topic models with fast text vectorization
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1515-1528

    The probabilistic topic model of a text document collection finds two matrices: a matrix of conditional probabilities of topics in documents and a matrix of conditional probabilities of words in topics. Each document is represented by a multiset of words also called the “bag of words”, thus assuming that the order of words is not important for revealing the latent topics of the document. Under this assumption, the problem is reduced to a low-rank non-negative matrix factorization governed by likelihood maximization. In general, this problem is ill-posed having an infinite set of solutions. In order to regularize the solution, a weighted sum of optimization criteria is added to the log-likelihood. When modeling large text collections, storing the first matrix seems to be impractical, since its size is proportional to the number of documents in the collection. At the same time, the topical vector representation (embedding) of documents is necessary for solving many text analysis tasks, such as information retrieval, clustering, classification, and summarization of texts. In practice, the topical embedding is calculated for a document “on-the-fly”, which may require dozens of iterations over all the words of the document. In this paper, we propose a way to calculate a topical embedding quickly, by one pass over document words. For this, an additional constraint is introduced into the model in the form of an equation, which calculates the first matrix from the second one in linear time. Although formally this constraint is not an optimization criterion, in fact it plays the role of a regularizer and can be used in combination with other regularizers within the additive regularization framework ARTM. Experiments on three text collections have shown that the proposed method improves the model in terms of sparseness, difference, logLift and coherence measures of topic quality. The open source libraries BigARTM and TopicNet were used for the experiments.
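
    As an illustration of a one-pass (linear-time) topical embedding of the kind described above, the sketch below computes the topic distribution of a document from the word-in-topic matrix and the document's word counts in a single pass over its words. The random matrix, the uniform topic prior, and the normalization are assumptions for the illustration and do not reproduce the paper's constraint.

      import numpy as np

      rng = np.random.default_rng(1)
      n_words, n_topics = 1000, 20

      # phi[w, t] = p(w | t): word-in-topic probabilities, here a random placeholder
      phi = rng.random((n_words, n_topics))
      phi /= phi.sum(axis=0, keepdims=True)

      def embed_document(word_ids, counts, topic_prior=None):
          """One pass over document words: theta_t is proportional to sum_w n_dw * p(t | w)."""
          p_t = np.full(n_topics, 1.0 / n_topics) if topic_prior is None else topic_prior
          p_tw = phi[word_ids] * p_t              # unnormalized p(t | w) via Bayes' rule
          p_tw /= p_tw.sum(axis=1, keepdims=True)
          theta = counts @ p_tw                   # accumulate over the bag of words
          return theta / theta.sum()

      # a toy document given as (word id, count) pairs
      word_ids = np.array([3, 17, 256, 940])
      counts = np.array([2.0, 1.0, 5.0, 1.0])
      print("topical embedding:", np.round(embed_document(word_ids, counts), 3))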

  7. Tupitsa N.K.
    On accelerated adaptive methods and their modifications for alternating minimization
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 497-515

    In the first part of the paper we present a convergence analysis of the AGMsDR method on a new class of functions — in general non-convex functions with $M$-Lipschitz-continuous gradients that satisfy the Polyak – Lojasiewicz condition. The method does not need the value of $\mu^{PL}>0$ in the condition and converges linearly with a scale factor $\left(1 - \frac{\mu^{PL}}{M}\right)$. It was previously proved that the method converges as $O\left(\frac1{k^2}\right)$ if a function is convex and has an $M$-Lipschitz-continuous gradient, and converges linearly with a scale factor $\left(1 - \sqrt{\frac{\mu^{SC}}{M}}\right)$ if the value of the strong convexity parameter $\mu^{SC}>0$ is known. The novelty is that one can retain linear convergence if $\frac{\mu^{PL}}{\mu^{SC}}$ is not known, but without the square root in the scale factor.
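
    For reference, the standard form of the Polyak – Lojasiewicz condition used above is: a differentiable function $f$ with minimal value $f^*$ satisfies it with parameter $\mu^{PL}>0$ if $\frac12\|\nabla f(x)\|^2 \ge \mu^{PL}\left(f(x)-f^*\right)$ for all $x$. In particular, every $\mu^{SC}$-strongly convex function satisfies it with $\mu^{PL}\ge\mu^{SC}$, while some non-convex functions satisfy it as well, which matches the class considered above.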

    The second part presents a modification of the AGMsDR method for solving problems that allow alternating minimization (Alternating AGMsDR). Similar results are proved.

    As a result, we present adaptive accelerated methods that converge as $O\left(\min\left\lbrace\frac{M}{k^2},\,\left(1-{\frac{\mu^{PL}}{M}}\right)^{(k-1)}\right\rbrace\right)$ on a class of convex functions with an $M$-Lipschitz-continuous gradient that satisfy the Polyak – Lojasiewicz condition. The algorithms do not need the values of $M$ and $\mu^{PL}$. If the Polyak – Lojasiewicz condition does not hold, the convergence is $O\left(\frac1{k^2}\right)$, but no tuning is needed.

    We also consider the adaptive catalyst envelope of non-accelerated gradient methods, which allows acceleration up to $O\left(\frac1{k^2}\right)$. We present a numerical comparison of non-accelerated adaptive gradient descent accelerated by the adaptive catalyst envelope with AGMsDR, Alternating AGMsDR, APDAGD (Adaptive Primal-Dual Accelerated Gradient Descent) and Sinkhorn's algorithm on the problem dual to the optimal transport problem.
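
    Sinkhorn's algorithm, used here as a baseline on the entropically regularized optimal transport problem, admits a very compact sketch. The random marginals, cost matrix, and regularization value below are an illustration, not the experimental setup of the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 50
      a = rng.random(n)
      a /= a.sum()                               # source marginal
      b = rng.random(n)
      b /= b.sum()                               # target marginal
      C = np.abs(rng.normal(size=(n, n)))        # cost matrix
      eps = 0.1                                  # entropic regularization

      K = np.exp(-C / eps)
      u = np.ones(n)
      v = np.ones(n)
      for _ in range(500):                       # alternating scaling (Bregman projections)
          u = a / (K @ v)
          v = b / (K.T @ u)

      P = u[:, None] * K * v[None, :]            # transport plan
      print("marginal errors:", np.abs(P.sum(axis=1) - a).max(), np.abs(P.sum(axis=0) - b).max())
      print("regularized transport cost:", float((P * C).sum()))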

    The conducted experiments show faster convergence of Alternating AGMsDR in comparison with the described catalyst approach and AGMsDR, despite the same asymptotic rate $O\left(\frac1{k^2}\right)$. Such behavior can be explained by the linear convergence of the AGMsDR method and was tested on quadratic functions. Alternating AGMsDR demonstrated better performance than AGMsDR.

  8. Vassilevski Y.V., Simakov S.S., Gamilov T.M., Salamatova V.Yu., Dobroserdova T.K., Kopytov G.V., Bogdanov O.N., Danilov A.A., Dergachev M.A., Dobrovolskii D.D., Kosukhin O.N., Larina E.V., Meleshkina A.V., Mychka E.Yu., Kharin V.Yu., Chesnokova K.V., Shipilov A.A.
    Personalization of mathematical models in cardiology: obstacles and perspectives
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 911-930

    Most biomechanical tasks of interest to clinicians can be solved only using personalized mathematical models. Such models make it possible to formalize and relate key pathophysiological processes, to evaluate, on the basis of clinically available data, non-measurable parameters that are important for the diagnosis of diseases, and to predict the result of a therapeutic or surgical intervention. The use of models in clinical practice imposes additional restrictions: clinicians require model validation on clinical cases, as well as speed and automation of the entire computational chain, from processing the input data to obtaining the result. Limitations on the simulation time, determined by the time of making a medical decision (of the order of several minutes), imply the use of reduction methods that correctly describe the processes under study within the framework of reduced models or machine learning tools.

    Personalization of models requires patient-oriented parameters, personalized geometry of the computational domain, and generation of a computational mesh. Model parameters are estimated by direct measurements, by solving inverse problems, or by machine learning methods. The requirement of personalization imposes severe restrictions on the number of fitted parameters that can be measured under standard clinical conditions. In addition to parameters, the model operates with boundary conditions that must take into account the patient’s characteristics. Methods for setting personalized boundary conditions significantly depend on the clinical setting of the problem and on the clinical data. Building a personalized computational domain through segmentation of medical images and generation of the computational mesh, as a rule, takes a lot of time and effort due to manual or semi-automatic operations. Development of automated methods for setting personalized boundary conditions and for segmentation of medical images with subsequent construction of a computational mesh is the key to the widespread use of mathematical modeling in clinical practice.

    The aim of this work is to review our solutions for personalization of mathematical models within the framework of three tasks of clinical cardiology: virtual assessment of the hemodynamic significance of coronary artery stenosis, calculation of global blood flow after hemodynamic correction of complex heart defects, and calculation of the coaptation characteristics of a reconstructed aortic valve.

  9. Abramov V.S., Petrov M.N.
    Application of the Dynamic Mode Decomposition in search of unstable modes in laminar-turbulent transition problem
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 1069-1090

    Laminar-turbulent transition is the subject of active research aimed at improving the economic efficiency of air vehicles, because drag increases in the turbulent boundary layer, which leads to higher fuel consumption. One of the directions of such research is the search for efficient methods that can be used to find the position of the transition in space. Using this information about the laminar-turbulent transition location when designing an aircraft, engineers can predict its performance and profitability at the initial stages of the project. Traditionally, the $e^N$ method, a well-known approach in industry, is applied to find the coordinates of the laminar-turbulent transition. However, despite its widespread use, this method has a number of significant drawbacks: it relies on the parallel flow assumption, which limits the scenarios for its application, and it requires computationally expensive calculations over a wide range of frequencies and wave numbers. Alternatively, flow analysis can be performed using Dynamic Mode Decomposition, which allows one to analyze flow disturbances using flow data directly. Since Dynamic Mode Decomposition is a dimensionality reduction method, the number of computations can be dramatically reduced. Furthermore, the use of Dynamic Mode Decomposition expands the applicability of the whole method, because its derivation makes no assumption of parallel flow.

    The presented study proposes an approach to finding the location of a laminar-turbulent transition using the Dynamic Mode Decomposition method. The essence of this approach is to divide the boundary layer region into sets of subregions, for each of which the transition point is independently calculated using Dynamic Mode Decomposition for flow analysis, after which the results are averaged to produce the final result. This approach is validated by laminar-turbulent transition predictions for subsonic and supersonic flows over a 2D flat plate with zero pressure gradient. The results demonstrate the fundamental applicability and high accuracy of the described method in a wide range of conditions. The study focuses on comparison with the $e^N$ method and proves the advantages of the proposed approach. It is shown that the use of Dynamic Mode Decomposition leads to significantly faster execution due to less intensive computations, while the accuracy is comparable to that of the solution obtained with the $e^N$ method. This indicates the prospects for using the described approach in real-world applications.
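
    Exact DMD, in its standard form, reduces to an SVD of the snapshot matrix and a small eigenvalue problem. Below is a minimal sketch on synthetic snapshots; the data, rank, and mode structure are illustrative assumptions, not the flat-plate flow fields of the study.

      import numpy as np

      def dmd(snapshots, rank):
          """Exact Dynamic Mode Decomposition of a snapshot matrix (states in columns)."""
          X, Xp = snapshots[:, :-1], snapshots[:, 1:]        # time-shifted snapshot pairs
          U, s, Vh = np.linalg.svd(X, full_matrices=False)
          U, s, Vh = U[:, :rank], s[:rank], Vh[:rank, :]     # truncate to the leading modes
          A_tilde = U.conj().T @ Xp @ Vh.conj().T / s        # reduced linear operator
          eigvals, W = np.linalg.eig(A_tilde)
          modes = Xp @ Vh.conj().T / s @ W                   # exact DMD modes
          return eigvals, modes

      # synthetic data: one slowly growing and one decaying oscillatory mode plus noise
      rng = np.random.default_rng(3)
      x = np.linspace(0, 1, 200)[:, None]
      t = np.arange(0, 100)[None, :]
      data = ((np.sin(5 * x) * np.exp((0.01 + 0.3j) * t)).real
              + (np.cos(11 * x) * np.exp((-0.02 + 0.7j) * t)).real
              + 0.01 * rng.normal(size=(200, 100)))
      eigvals, _ = dmd(data, rank=6)
      print("growing (unstable) modes have |eigenvalue| > 1:", np.round(np.abs(eigvals), 3))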

  10. Varshavsky L.E.
    Approximate methods of studying dynamics of market structure
    Computer Research and Modeling, 2012, v. 4, no. 1, pp. 219-229

    An approach to the computation of open-loop optimal Nash–Cournot strategies in dynamic games, based on the Z-transform method and factorization, is proposed. The main advantage of the proposed approach is that it makes it possible to overcome the problems of instability of the economic indicators of oligopolies that arise when generalized Riccati equations are used.

    Views (last year): 3. Citations: 9 (RSCI).