Search results for 'sets of results':
Articles found: 131
  1. Kalachin S.V.
    Fuzzy modeling of human susceptibility to panic situations
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 203-218

    The study of the mechanism behind the development of mass panic, in view of its extreme importance and social danger, is an important scientific task. The available information about this mechanism is based mainly on the work of psychologists and is inherently imprecise. Therefore, the theory of fuzzy sets was chosen as a tool for developing a mathematical model of a person's susceptibility to panic situations. As a result of the study, a fuzzy model was developed, consisting of the blocks “Fuzzification”, where the degrees of membership of the input parameter values in fuzzy sets are calculated; “Inference”, where, based on these degrees of membership, the resulting membership function of the output value of the fuzzy model is computed; and “Defuzzification”, where the center of gravity method is used to determine a single quantitative value of the output variable characterizing a person's susceptibility to panic situations. Since the real quantitative values of the linguistic variables describing a person's mental properties are unknown, the quality of the developed model cannot be assessed without endangering people. Therefore, the quality of the fuzzy modeling results was estimated by the calculated coefficient of determination, which showed that the developed fuzzy model belongs to the category of good-quality models $(R^2 = 0.93)$, confirming the legitimacy of the assumptions made during its development. According to the simulation results, the susceptibility to panic situations of sanguine and choleric persons can be classified as “increased” (0.88), and that of phlegmatic and melancholic persons as “moderate” (0.38). This means that choleric and sanguine persons can become epicenters of panic and initiators of a stampede, while phlegmatic and melancholic persons can become obstacles on evacuation routes. This should be taken into account when developing effective evacuation measures, the main task of which is to evacuate people from adverse conditions quickly and safely. In the approved methods, the calculation of normative values of safety parameters is based on simplified analytical models of human flow, because a large number of factors have to be taken into account, some of which are quantitatively uncertain. The obtained result, in the form of quantitative estimates of a person's susceptibility to panic situations, will increase the accuracy of such calculations.
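
    A minimal sketch of the three-block chain described in the abstract (fuzzification, Mamdani-style inference, center-of-gravity defuzzification) is given below. The membership functions, the rule base, the variable names and the input value are illustrative assumptions and are not taken from the paper.

```python
# Sketch of a fuzzification -> inference -> center-of-gravity defuzzification chain.
# All fuzzy sets and rules below are hypothetical, for illustration only.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Fuzzification: degrees of membership of a crisp input in two illustrative fuzzy sets.
def fuzzify(x):
    return {"low": tri(x, 0.0, 0.2, 0.6), "high": tri(x, 0.4, 0.8, 1.0)}

# Output universe and output fuzzy sets (purely illustrative shapes).
y = np.linspace(0.0, 1.0, 501)
out_sets = {"moderate": tri(y, 0.1, 0.38, 0.7), "increased": tri(y, 0.6, 0.88, 1.0)}

# Inference (Mamdani-style): clip each output set by the activation of its rule
# and aggregate with max. Hypothetical rules: "low -> moderate", "high -> increased".
def infer(memberships):
    clipped = [np.minimum(memberships["low"], out_sets["moderate"]),
               np.minimum(memberships["high"], out_sets["increased"])]
    return np.maximum.reduce(clipped)

# Defuzzification by the center-of-gravity method.
def defuzzify(mu):
    return np.trapz(mu * y, y) / np.trapz(mu, y)

print(defuzzify(infer(fuzzify(0.8))))  # crisp susceptibility estimate for one input
```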

  2. Kalachin S.V.
    Fuzzy modeling the mechanism of transmitting panic state among people with various temperament species
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1079-1092

    A mass gathering of people always represents a potential danger and a threat to their lives. In addition, every year a very large number of people around the world die in crushes, the main cause of which is mass panic. Therefore, the study of the phenomenon of mass panic, in view of its extreme social danger, is an important scientific task. The available information about the processes of its occurrence and spread is inherently imprecise. Therefore, the theory of fuzzy sets was chosen as a tool for developing a mathematical model of the mechanism of transmission of the panic state among people with different temperament types.

    When developing the fuzzy model, it was assumed that panic spreads among people from the epicenter of the shocking stimulus according to the wave principle, passing at different frequencies through different media (types of human temperament), and is determined by the speed and intensity of the circular reaction of the mechanism of transmission of the panic state among people. Therefore, the developed fuzzy model, along with two inputs, has two outputs: the speed and the intensity of the circular reaction. In the «Fuzzification» block, the degrees of membership of the numerical values of the input parameters in fuzzy sets are calculated. The «Inference» block receives the degrees of membership of each input parameter and determines the resulting membership function of the speed of the circular reaction and its derivative, which serves as the membership function of the intensity of the circular reaction. In the «Defuzzification» block, the center of gravity method is used to determine a quantitative value for each output parameter. The quality assessment of the developed fuzzy model, carried out by calculating the coefficient of determination, showed that the developed mathematical model belongs to the category of good-quality models.

    The obtained result, in the form of quantitative assessments of the circular reaction, makes it possible to improve the understanding of the mental processes occurring during the transmission of the panic state among people. In addition, it makes it possible to improve existing models of chaotic human behavior and to develop new ones. Such models are intended to support effective decisions in crisis situations, aimed at the full or partial prevention of the spread of mass panic, which leads to panic flight and human casualties.

     

  3. Voronina M.Y., Orlov Y.N.
    Identification of the author of the text by segmentation method
    Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1199-1210

    The paper describes a method for recognizing the authors of literary texts based on the proximity of the fragments into which a text is divided to the author's standard. The standard is the empirical frequency distribution of letter combinations built on a training sample of expertly selected works reliably known to belong to this author. A set of standards of different authors forms a library, within which the problem of identifying the author of an unknown text is solved. The proximity between texts is understood in the sense of the L1 norm for the frequency vector of letter combinations, which is constructed for each fragment and for the text as a whole. The unknown text is attributed to the author whose standard is most often chosen as the closest one for the set of fragments into which the text is divided. The fragment length is optimized based on the principle of the maximum difference in distances from fragments to standards in the «friend–foe» recognition problem. The method was tested on a corpus of domestic and foreign (translated) authors: 1783 texts of 100 authors with a total volume of about 700 million characters were collected. In order to exclude bias in the selection of authors, authors whose surnames began with the same letter were considered; in particular, for the letter L, the identification error was 12%. Along with fairly high accuracy, this method has another important property: it makes it possible to estimate the probability that the standard of the author of the text in question is missing from the library. This probability can be estimated from the statistics of the nearest standards for small fragments of text. The paper also examines statistical digital portraits of writers: joint empirical distributions of the probability that a certain proportion of the text is identified at a given confidence level. The practical importance of these statistics is that the supports of the corresponding distributions practically do not overlap for an author's own standard and for other authors' standards, which makes it possible to recognize the reference distribution of letter combinations with a high level of confidence.
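
    As an illustration of the described scheme, the sketch below builds letter-bigram frequency "standards", cuts an unknown text into fragments and lets each fragment vote for the nearest standard in the L1 norm. The preprocessing, the choice of bigrams and the fragment length of 5000 characters are assumptions; the paper optimizes the fragment length and uses expertly selected training corpora.

```python
# Simplified sketch of author attribution by L1-nearest letter-bigram standards.
from collections import Counter

def bigram_freqs(text):
    """Empirical frequency distribution of letter pairs in a text."""
    counts = Counter(text[i:i + 2] for i in range(len(text) - 1))
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

def l1_distance(f, g):
    """L1 distance between two frequency dictionaries."""
    keys = set(f) | set(g)
    return sum(abs(f.get(k, 0.0) - g.get(k, 0.0)) for k in keys)

def identify(unknown_text, standards, fragment_len=5000):
    """Attribute a text to the author whose standard wins the most fragment votes."""
    votes = Counter()
    for i in range(0, len(unknown_text) - fragment_len + 1, fragment_len):
        fragment = bigram_freqs(unknown_text[i:i + fragment_len])
        nearest = min(standards, key=lambda a: l1_distance(fragment, standards[a]))
        votes[nearest] += 1
    return votes.most_common(1)[0][0], votes

# Usage with hypothetical data:
#   standards = {"author A": bigram_freqs(corpus_a), "author B": bigram_freqs(corpus_b)}
#   best_author, vote_counts = identify(unknown_text, standards)
```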

  4. Aksenov A.A., Kalugina M.D., Lobanov A.I., Kashirin V.S.
    Numerical simulation of fluid flow in a blood pump in the FlowVision software package
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 1025-1038

    A numerical simulation of fluid flow in a blood pump was performed using the FlowVision software package. This test problem, provided by the Center for Devices and Radiological Health of the U.S. Food and Drug Administration, involved considering fluid flow in several design modes. For each calculation case, a specific value of the fluid flow rate and rotor speed was set. The necessary data in the form of the exact geometry, flow conditions and fluid properties were provided to all research participants, who used different software packages for the modeling. Numerical simulations were performed in FlowVision for six calculation modes with a Newtonian fluid and the standard $k-\varepsilon$ turbulence model; in addition, the fifth mode was computed with the $k-\omega$ SST turbulence model and with the Carreau rheological model of the fluid. At the first stage of the numerical simulation, mesh convergence was investigated, on the basis of which a final mesh with about 6 million cells was chosen. Because of the large number of cells, part of the calculations was performed on the Lomonosov-2 cluster to speed up the study. As a result of the numerical simulation, we obtained and analyzed the pressure difference between the pump inlet and outlet and the velocity between the rotor blades and in the diffuser region, and we visualized the velocity distribution in selected cross-sections. For all design modes, the numerically obtained pressure difference was compared with the experimental data, and for the fifth calculation mode the velocity distribution between the rotor blades and in the diffuser region was also compared with the experiment. The analysis has shown good agreement of the FlowVision calculation results with the experimental data and with the numerical simulations in other software packages. The results obtained in FlowVision for the US FDA test suggest that the FlowVision software package can be used to solve a wide range of hemodynamic problems.

  5. Ablaev S.S., Makarenko D.V., Stonyakin F.S., Alkousa M.S., Baran I.V.
    Subgradient methods for non-smooth optimization problems with some relaxation of sharp minimum
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 473-495

    Non-smooth optimization often arises in many applied problems. The development of efficient computational procedures for such problems in high-dimensional spaces is very topical. First-order (subgradient) methods are well suited here, but in fairly general settings they provide only slow convergence guarantees for large-scale problems. One approach to this type of problem is to identify a subclass of non-smooth problems that admit relatively optimistic convergence-rate results. For example, one possible additional assumption is the sharp-minimum condition proposed in the late 1960s by B. T. Polyak. If the minimal value of the function is available, then for Lipschitz-continuous problems with a sharp minimum it is possible to use a subgradient method with the Polyak step-size, which guarantees a linear rate of convergence in the argument. This approach covers a number of important applied problems (for example, the problem of projecting onto a convex compact set). However, both the availability of the minimal value of the function and the sharp-minimum condition itself look rather restrictive. In this regard, in this paper we propose a generalized sharp-minimum condition, somewhat similar to the inexact oracle recently proposed by Devolder, Glineur and Nesterov. The proposed approach makes it possible to extend the applicability of subgradient methods with the Polyak step-size to the situation of inexact information about the minimal value, as well as an unknown Lipschitz constant of the objective function. Moreover, the use of local analogues of global characteristics of the objective function makes it possible to apply results of this type to wider classes of problems. We show that the proposed approach is applicable to strongly convex non-smooth problems and carry out an experimental comparison with the known optimal subgradient method for this class of problems. Moreover, some results were obtained on the applicability of the proposed technique to certain types of problems with relaxed convexity assumptions: the recently proposed notion of weak $\beta$-quasi-convexity and ordinary quasi-convexity. We also study a generalization of the described technique to the situation where a $\delta$-subgradient of the objective function is available instead of the usual subgradient. For one of the considered methods, conditions are found under which it is possible in practice to avoid projecting the iterative sequence onto the feasible set of the problem.
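
    For reference, a minimal sketch of the subgradient method with the Polyak step-size $x_{k+1} = x_k - \frac{f(x_k) - f_*}{\|g_k\|^2}\, g_k$ is given below on the textbook example $f(x) = \|x\|_1$, which has a sharp minimum at the origin with $f_* = 0$. It illustrates only the classical step-size rule; the relaxed and inexact variants studied in the paper are not reproduced.

```python
# Sketch of the subgradient method with the Polyak step-size on f(x) = ||x||_1.
import numpy as np

def polyak_subgradient(f, subgrad, x0, f_star, n_iter=200):
    x = x0.astype(float)
    for _ in range(n_iter):
        g = subgrad(x)
        gap = f(x) - f_star
        if gap <= 0 or np.allclose(g, 0):        # already optimal
            break
        x = x - gap / np.dot(g, g) * g           # Polyak step
    return x

f = lambda x: np.sum(np.abs(x))                  # sharp minimum at x = 0, f_* = 0
subgrad = lambda x: np.sign(x)                   # a subgradient of the l1-norm
x = polyak_subgradient(f, subgrad, x0=np.array([3.0, -2.0, 0.5]), f_star=0.0)
print(f(x))                                      # distance to the minimizer decays linearly
```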

  6. Nikitiuk A.S.
    Parameter identification of viscoelastic cell models based on force curves and wavelet transform
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1653-1672

    The mechanical properties of eukaryotic cells play an important role in their life cycle and in the development of pathological processes. In this paper, we discuss the problem of parameter identification and verification of viscoelastic constitutive models based on force spectroscopy data of living cells. It is proposed to use the one-dimensional continuous wavelet transform to calculate the relaxation function. Analytical calculations and the results of numerical simulation are given, which make it possible to obtain mutually consistent relaxation functions from experimentally determined force curves and theoretical stress–strain relationships using wavelet differentiation algorithms. Test examples demonstrating the correctness of the software implementation of the proposed algorithms are analyzed. Cell models are considered for which the application of the proposed procedure for identification and verification of their parameters is demonstrated. Among them are a structural-mechanical model with parallel-connected fractional elements, which is currently the most adequate in terms of agreement with atomic force microscopy data for a wide class of cells, and a new statistical-thermodynamic model, which is not inferior in descriptive capability to models with fractional derivatives but has a clearer physical meaning. For the statistical-thermodynamic model, the procedure of its construction is described in detail; it includes the introduction of a structural variable, the order parameter, to describe the orientational properties of the cell cytoskeleton; the statement and solution of the statistical problem for the ensemble of actin filaments of a representative cell volume with respect to this variable; and the determination of the form of the free energy as a function of the order parameter, temperature and external load. It is also proposed to use an oriented viscoelastic body as a model of a representative element of the cell. Following the theory of linear thermodynamics, evolutionary equations describing the mechanical behavior of the representative volume of the cell are obtained, which satisfy the basic thermodynamic laws. The problem of optimizing the parameters of the statistical-thermodynamic model of the cell, which can be compared both with experimental data and with the results of simulations based on other mathematical models, is also posed and solved. The viscoelastic characteristics of cells are determined on the basis of comparison with literature data.
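
    The key computational step mentioned in the abstract, differentiation of a noisy force curve by a continuous wavelet transform, can be sketched as a convolution with a Gaussian-derivative wavelet: the result is the derivative of the Gaussian-smoothed signal. The wavelet, the scale and the test signal below are illustrative assumptions, not the paper's specific choices.

```python
# Sketch of wavelet-based differentiation with a Gaussian-derivative kernel.
import numpy as np

def wavelet_derivative(signal, dt, scale):
    """Smoothed derivative of a uniformly sampled signal at the given wavelet scale."""
    half_width = int(4 * scale / dt)
    t = np.arange(-half_width, half_width + 1) * dt
    gauss = np.exp(-t**2 / (2 * scale**2)) / (np.sqrt(2 * np.pi) * scale)
    dgauss = -t / scale**2 * gauss           # first derivative of the Gaussian kernel
    # (f * g')(t) = (f * g)'(t): convolving with the kernel derivative returns the
    # derivative of the Gaussian-smoothed signal; dt approximates the integral.
    return np.convolve(signal, dgauss, mode="same") * dt

t = np.arange(0.0, 10.0, 0.01)
noisy_force = np.sin(t) + 0.05 * np.random.randn(t.size)    # illustrative noisy "force curve"
d = wavelet_derivative(noisy_force, dt=0.01, scale=0.2)     # close to cos(t) away from edges
```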

  7. Yashina M.V., Tatashev A.G.
    Double-circuit system with clusters of different lengths and unequal arrangement of two nodes on the circuits
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 217-240

    We study a system that belongs to the class of dynamical systems proposed by A. P. Buslaev (Buslaev networks). In this system, on each of two closed contours there is a segment, called a cluster, which moves at a constant speed if there are no delays. The lengths of the clusters are $l_1$ and $l_2$. The contours have two common points, called nodes. Delays in the movement of the clusters arise because two clusters cannot pass through a node at the same time. The length of each contour is the same and is taken as unity. The nodes divide contour $i$ into two parts, one of length $d_i$ and the other of length $1-d_i$, $i=1,\,2$, where $i$ is the contour number. We study the spectrum of average velocities of the system, i.e., the set of pairs $(v_1,\,v_2)$, where $v_i$ is the average velocity of cluster $i$ taking delays into account, for different initial states and fixed values of $l_1$, $l_2$, $d_1$, $d_2$. Twelve scenarios of system behavior have been identified; for each scenario, sufficient conditions for its realization have been found, and each of the observed spectra contains one or two pairs of average velocities.
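
    A toy discretized simulation of such a two-contour system is sketched below, only to show how an average-velocity pair $(v_1,\,v_2)$ can be estimated numerically. The conflict-resolution rule used here (the cluster on contour 1 is updated first and has priority when both clusters would occupy a node simultaneously), the node placement and the discretization are assumptions; the paper's exact dynamics and analytical results are not reproduced.

```python
# Toy discrete simulation of two clusters on two unit-length contours with two shared nodes.
def simulate(l1, l2, d1, d2, shift1=0.0, shift2=0.5, n_cells=1000, n_steps=100000):
    """Estimate the average-velocity pair (v1, v2); initial positions are assumed conflict-free."""
    lengths = [max(1, round(l1 * n_cells)), max(1, round(l2 * n_cells))]
    heads = [int(shift1 * n_cells) % n_cells, int(shift2 * n_cells) % n_cells]
    # Node A is placed at cell 0 of both contours, node B at cell round(d_i * n_cells).
    nodes = [(0, round(d1 * n_cells)), (0, round(d2 * n_cells))]
    moved = [0, 0]

    def covers(contour, head, node_cell):
        """True if the cluster on `contour` with leading cell `head` covers `node_cell`."""
        return (head - node_cell) % n_cells < lengths[contour]

    for _ in range(n_steps):
        for i in (0, 1):              # contour 1 is updated first: assumed priority rule
            j = 1 - i
            new_head = (heads[i] + 1) % n_cells
            blocked = any(covers(i, new_head, nodes[i][k]) and covers(j, heads[j], nodes[j][k])
                          for k in (0, 1))
            if not blocked:
                heads[i] = new_head
                moved[i] += 1
    # One cell per step corresponds to the free-flow speed 1, so v_i = moved_i / n_steps.
    return moved[0] / n_steps, moved[1] / n_steps

print(simulate(l1=0.3, l2=0.2, d1=0.4, d2=0.6))
```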

  8. Govorkov D.A., Novikov V.P., Solovyev I.G., Tsibulsky V.R.
    Interval analysis of vegetation cover dynamics
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1191-1205

    Developing a previously obtained result on modeling the dynamics of vegetation cover driven by variations in the temperature background, we present a new scheme for the interval analysis of the dynamics of the floristic images of formations for the case when the response-rate parameter of the dynamics model of each recorded plant species is specified by an interval of its possible values. A detailed specification of the functional parameters of biodiversity macromodels, desirable in fundamental research and accounting for the essential causes of the observed evolutionary processes, may turn out to be problematic. The use of more reliable interval estimates of the variability of the functional parameters “bypasses” the problem of uncertainty in the primary assessment of the evolution of the phyto-resource potential of the developed controlled territories. The solutions obtained preserve not only a qualitative picture of the dynamics of species diversity but also give a rigorous, within the framework of the initial assumptions, quantitative assessment of the degree of presence of each plant species. The practical significance of two-sided estimation schemes based on the construction of equations for the upper and lower boundaries of the scatter of solutions depends on the conditions and the degree of proportional correspondence between the scatter intervals of the initial parameters and the scatter intervals of the solutions. For dynamic systems, the desired proportionality is not always ensured. The given examples demonstrate an acceptable accuracy of interval estimation of evolutionary processes. It is important to note that the constructed estimating equations generate vanishing scatter intervals of the solutions for quasi-constant temperature perturbations of the system. In other words, the trajectories of stationary temperature states of the vegetation cover are not coarsened by the proposed interval estimation scheme. The rigor of the interval estimation of the species composition of the vegetation cover of formations can become a determining factor when choosing a method for analyzing the dynamics of species diversity and the plant potential of territorial systems of resource-ecological monitoring. The possibilities of the proposed approach are illustrated by geoinformation images of the computational analysis of the dynamics of the vegetation cover of the Yamal Peninsula and by graphs of a retrospective analysis of the floristic variability of the formations of the landscape-lithological group “Upper”, based on the summer temperature data of the Salehard weather station from 2010 back to 1935. The developed indicators of floristic variability and the given graphs characterize the dynamics of species diversity both on average and individually, in the form of intervals of possible states for each plant species.
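
    A minimal sketch of the two-sided estimation idea for a single species is given below, assuming a first-order response $\dot{x} = a\,(\varphi(T(t)) - x)$ with the response-rate parameter $a$ known only up to an interval $[a_{\min},\,a_{\max}]$; lower and upper bound trajectories are integrated with the extreme admissible rates, so that the true trajectory stays inside the band for any admissible $a$. The response function, the temperature signal and the parameter values are illustrative assumptions, not the authors' model.

```python
# Sketch of two-sided interval estimation for a scalar first-order response model.
import numpy as np

def interval_trajectory(phi_of_t, a_min, a_max, x0, t_grid):
    """Explicit-Euler integration of the lower/upper bounding equations."""
    x_lo = np.empty_like(t_grid); x_up = np.empty_like(t_grid)
    x_lo[0] = x_up[0] = x0
    for k in range(len(t_grid) - 1):
        dt = t_grid[k + 1] - t_grid[k]
        phi = phi_of_t(t_grid[k])
        # The lower bound uses the smallest admissible derivative, the upper bound
        # the largest one, which flips between a_min and a_max with the sign of phi - x.
        d_lo = phi - x_lo[k]
        d_up = phi - x_up[k]
        x_lo[k + 1] = x_lo[k] + dt * (a_min * d_lo if d_lo >= 0 else a_max * d_lo)
        x_up[k + 1] = x_up[k] + dt * (a_max * d_up if d_up >= 0 else a_min * d_up)
    return x_lo, x_up

t = np.linspace(0.0, 50.0, 2001)
phi = lambda s: 1.0 + 0.3 * np.sin(0.2 * s)        # illustrative temperature response
lo, up = interval_trajectory(phi, a_min=0.1, a_max=0.3, x0=0.5, t_grid=t)
# For a quasi-constant phi the band up - lo shrinks, mirroring the vanishing
# scatter intervals noted in the abstract.
```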

  9. Vassilevski Y.V., Simakov S.S., Gamilov T.M., Salamatova V.Yu., Dobroserdova T.K., Kopytov G.V., Bogdanov O.N., Danilov A.A., Dergachev M.A., Dobrovolskii D.D., Kosukhin O.N., Larina E.V., Meleshkina A.V., Mychka E.Yu., Kharin V.Yu., Chesnokova K.V., Shipilov A.A.
    Personalization of mathematical models in cardiology: obstacles and perspectives
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 911-930

    Most biomechanical problems of interest to clinicians can be solved only with personalized mathematical models. Such models make it possible to formalize and relate key pathophysiological processes, to evaluate, on the basis of clinically available data, non-measurable parameters that are important for the diagnosis of diseases, and to predict the result of a therapeutic or surgical intervention. The use of models in clinical practice imposes additional restrictions: clinicians require validation of the model on clinical cases as well as speed and automation of the entire computational technological chain, from processing the input data to obtaining the result. Limitations on the simulation time, determined by the time available for making a medical decision (of the order of several minutes), imply the use of reduction methods that correctly describe the processes under study within reduced models or machine learning tools.

    Personalization of models requires patient-specific parameters, a personalized geometry of the computational domain and generation of a computational mesh. Model parameters are estimated by direct measurements, by methods for solving inverse problems or by machine learning methods. The requirement of personalization imposes severe restrictions on the number of fitted parameters that can be measured under standard clinical conditions. In addition to parameters, the model operates with boundary conditions that must take the patient's characteristics into account. Methods for setting personalized boundary conditions depend significantly on the clinical formulation of the problem and on the available clinical data. Building a personalized computational domain through segmentation of medical images and generation of the computational mesh, as a rule, takes a lot of time and effort due to manual or semi-automatic operations. The development of automated methods for setting personalized boundary conditions and for segmentation of medical images with subsequent construction of a computational mesh is the key to the widespread use of mathematical modeling in clinical practice.

    The aim of this work is to review our solutions for the personalization of mathematical models within the framework of three tasks of clinical cardiology: virtual assessment of the hemodynamic significance of coronary artery stenosis, calculation of the global blood flow after hemodynamic correction of complex heart defects, and calculation of the coaptation characteristics of a reconstructed aortic valve.

  10. Abramov V.S., Petrov M.N.
    Application of the Dynamic Mode Decomposition in search of unstable modes in laminar-turbulent transition problem
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 1069-1090

    Laminar-turbulent transition is the subject of active research related to improving the economic efficiency of air vehicles, because drag increases in the turbulent boundary layer, which leads to higher fuel consumption. One of the directions of such research is the search for efficient methods that can be used to find the position of the transition in space. Using this information about the laminar-turbulent transition location when designing an aircraft, engineers can predict its performance and profitability at the initial stages of the project. Traditionally, the $e^N$ method is applied to find the coordinates of the laminar-turbulent transition; it is a well-known approach in industry. However, despite its widespread use, this method has a number of significant drawbacks: it relies on the parallel-flow assumption, which limits the scenarios for its application, and it requires computationally expensive calculations over a wide range of frequencies and wave numbers. Alternatively, the flow can be analyzed using Dynamic Mode Decomposition, which allows one to analyze flow disturbances using flow data directly. Since Dynamic Mode Decomposition is a dimensionality-reduction method, the number of computations can be dramatically reduced. Furthermore, the use of Dynamic Mode Decomposition extends the applicability of the whole method, since its derivation does not rely on the parallel-flow assumption.

    The presented study proposes an approach to finding the location of a laminar-turbulent transition using the Dynamic Mode Decomposition method. The essence of the approach is to divide the boundary-layer region into sets of subregions, for each of which the transition point is calculated independently using Dynamic Mode Decomposition for flow analysis, after which the results are averaged to produce the final answer. The approach is validated on laminar-turbulent transition predictions for subsonic and supersonic flows over a 2D flat plate with zero pressure gradient. The results demonstrate the fundamental applicability and high accuracy of the described method over a wide range of conditions. The study focuses on a comparison with the $e^N$ method and demonstrates the advantages of the proposed approach. It is shown that the use of Dynamic Mode Decomposition leads to significantly faster execution due to less intensive computations, while the accuracy is comparable to that of the solution obtained with the $e^N$ method. This indicates the prospects for using the described approach in real-world applications.
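
    For reference, a standard sketch of the exact Dynamic Mode Decomposition algorithm referenced in the abstract is shown below; the flow-data snapshots, the truncation rank and the transition criterion used in the paper are not reproduced here.

```python
# Sketch of the exact Dynamic Mode Decomposition (DMD) of a snapshot matrix.
import numpy as np

def dmd(snapshots, rank):
    """Exact DMD of a matrix whose columns are successive flow-state snapshots.

    Returns the DMD eigenvalues and modes of the best-fit linear operator A with
    X2 ~ A X1, where X1 and X2 are the snapshot matrix shifted by one time step.
    """
    X1, X2 = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)        # reduced SVD of X1
    Ur, Sr, Vr = U[:, :rank], np.diag(s[:rank]), Vh[:rank].conj().T
    A_tilde = Ur.conj().T @ X2 @ Vr @ np.linalg.inv(Sr)      # low-rank operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vr @ np.linalg.inv(Sr) @ W                  # exact DMD modes
    return eigvals, modes

# Growth rates follow from the eigenvalues: for a snapshot spacing dt,
# sigma = log(abs(eigvals)) / dt; modes with sigma > 0 indicate growing (unstable)
# disturbances of the kind used to locate the transition.
```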
